
Fix pipelines with Kubeflow profile quota #5695

Merged
merged 1 commit into kubeflow:master on May 19, 2021

Conversation

juliusvonkohout
Member

@Bobgy sorry for the inconvenience. Migrated from kubeflow/manifests#1886

Kubeflow profile quota support is broken in Kubeflow 1.3 because all pipelines fail. The workflow-controller configmap was not updated properly when Argo was upgraded.

The out-of-sync Kubeflow Argo configmap does not set resource requests on the init and wait containers, which is needed for Kubeflow profile quota support. Argo upstream does this properly, as seen here:
https://github.com/argoproj/argo-workflows/blob/26f08c10ae3b88b3ee438cd1aba2ba1241e35cf9/docs/workflow-controller-configmap.yaml#L152
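For reference, the linked upstream example sets resource requests (and limits) on the Argo executor, i.e. on the injected init and wait containers, via the workflow-controller-configmap. Below is a minimal sketch of that setting with illustrative values; the exact key layout differs between Argo versions (some manifests nest everything under a single config: | key), so treat the linked file as the authoritative form:

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  executor: |
    resources:
      requests:
        cpu: 0.1        # illustrative value; without any request the quota rejects the pod
        memory: 64Mi
      limits:
        cpu: 0.5
        memory: 512Mi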

Take, for example, the profile from the Kubeflow documentation at https://www.kubeflow.org/docs/components/multi-tenancy/getting-started/#manual-profile-creation. It creates a quota that prevents any pod from starting unless it specifies a CPU request, which also blocks the init and wait containers of Argo.

apiVersion: kubeflow.org/v1beta1
kind: Profile
metadata:
  name: profileName   # replace with the name of the profile you want; this will be the user's namespace name
spec:
  owner:
    kind: User
    name: [email protected]   # replace with the email of the user

  resourceQuotaSpec:   # resource quota can be set optionally
    hard:
      cpu: "2"
      memory: 2Gi
      requests.nvidia.com/gpu: "1"
      persistentvolumeclaims: "1"
      requests.storage: "5Gi"
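For context, the profile controller materializes resourceQuotaSpec as a plain Kubernetes ResourceQuota in the profile's namespace, roughly like the sketch below (the object name is an assumption for illustration). Note that cpu and memory in a ResourceQuota are counted against requests.cpu and requests.memory, which is why every container in the namespace, including Argo's init and wait containers, must declare requests:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: kf-resource-quota   # name is an assumption for illustration
  namespace: profileName    # the profile's namespace
spec:
  hard:
    cpu: "2"                   # enforced as requests.cpu
    memory: 2Gi                # enforced as requests.memory
    requests.nvidia.com/gpu: "1"
    persistentvolumeclaims: "1"
    requests.storage: "5Gi"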

You also need to add default CPU and memory requests to each ContainerOp, for example:

from kfp.components import InputPath, func_to_container_op

def print_text(text_path: InputPath()):  # Kubeflow's InputPath() supports strings and files
    '''Print a file line by line.'''
    import time
    time.sleep(15)
    with open(text_path, 'r') as reader:
        for line in reader:
            print(line, end='')

def pod_defaults(op):
    # Default requests so the pods pass the namespace resource quota.
    op.set_memory_request('10Mi')  # op.set_memory_limit('1000Mi')
    op.set_cpu_request('10m')      # op.set_cpu_limit('1000m')
    return op

def sum_pipeline(count: int = 100000):
...
    print_text_component = func_to_container_op(func=print_text, base_image=BASE_IMAGE)(sum_numbers_component.output)
    pod_defaults(print_text_component)
...
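If a pipeline has many steps, calling pod_defaults by hand on each op gets tedious. Below is a sketch of an alternative with the KFP v1 SDK that registers the same function as an op transformer so it is applied to every ContainerOp in the pipeline (assumes pod_defaults from above; the pipeline body itself is elided):

from kfp import dsl

@dsl.pipeline(name='sum-pipeline')
def sum_pipeline(count: int = 100000):
    # Apply pod_defaults to every ContainerOp created in this pipeline.
    dsl.get_pipeline_conf().add_op_transformer(pod_defaults)
    ...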

@google-oss-robot

Hi @juliusvonkohout. Thanks for your PR.

I'm waiting for a kubeflow member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Bobgy
Contributor

Bobgy commented May 19, 2021

Thank you for catching this problem!
/lgtm
/approve

@google-oss-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Bobgy

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@google-oss-robot google-oss-robot merged commit 87d9dd5 into kubeflow:master May 19, 2021
@juliusvonkohout
Member Author

> Thank you for catching this problem!
> /lgtm
> /approve

Thanks, this is a strong candidate for 1.3.1 @Bobgy @yanniszark
kubeflow/manifests#1890
