# Include override configuration for Prefect Agent #813

## Conversation
Additionally, there need to be docs in the configuration docs on how to specify the overrides within `qhub-config.yaml`.
qhub/template/{{ cookiecutter.repo_directory }}/infrastructure/kubernetes.tf
...{{ cookiecutter.repo_directory }}/infrastructure/modules/kubernetes/services/prefect/main.tf
I just need to add a quick doc about this and we are ready to go 👍. I've made a simple test with a local deployment; this is the structure present in qhub-config:

```yaml
prefect:
  enabled: true
  overrides:
    prefect_agent:
      prefectLabels: [dev, prod]
      job:
        resources:
          requests:
            memory: 20Gi
```

and this is the resulting output in k9s:

```
Environment:
  PREFECT__CLOUD__AGENT__LABELS: ["dev","prod"]
  JOB_MEM_REQUEST: 20Gi
```
Currently the only overridable variables are:

I can include the secrets as well; this way the default values will be present in
```yaml
{{- range $k, $v := .Values.envVars }}
- name: {{ $k }}
  value: {{ $v -}}
{{ end }}
```
@viniciusdc I added these lines. They allow you to add arbitrary environment variables to the Prefect agent. This was one of the things @jhamman requested, and it may simplify this PR. It also lets users specify any environment variable, including others that could affect Prefect, without requiring a future PR to qhub.

E.g. a values file like the one below . . .
```yaml
# values.yaml
prefectImage: <placeholder>
namespace: <placeholder>
serviceAccount: prefect
prefectToken: "<placeholder>"
cloudApi: "<placeholder>"
env:
  MY_ENV_VAR: my_env_var_value
  MY_ENV_VAR2: my_env_var_value2
  MY_ENV_VAR3: my_env_var_value3
```

. . . would produce the following Kubernetes manifest:
```yaml
# Source: prefect/templates/prefect.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: prefect-agent
  name: prefect-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prefect-agent
  template:
    metadata:
      labels:
        app: prefect-agent
    spec:
      serviceAccountName: prefect
      containers:
        - args:
            - prefect agent start kubernetes -e JUPYTERHUB_API_TOKEN=$JUPYTERHUB_API_TOKEN
          command:
            - /bin/bash
            - -c
          env:
            - name: MY_ENV_VAR
              value: my_env_var_value
            - name: MY_ENV_VAR2
              value: my_env_var_value2
            - name: MY_ENV_VAR3
              value: my_env_var_value3
      # . . . rest of file would be below
```
Oh that's nice! Thanks for the pointer, Adam.
My only concern with that is related to the variables that need `toJson` when included in the manifest, e.g. `PREFECT__CLOUD__AGENT__LABELS`, which is a list, and `IMAGE_PULL_SECRETS`, which is a dictionary containing the secrets. How would I handle those using the proposed env structure above?
I imagine you could get the desired format by using backslashes. If not, I think it's reasonable to keep the environment variables that need special formatting in there.
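As a sketch of how structured values could fit the proposed env loop (the `kindIs`/`toJson` combination here is illustrative, not the chart's actual template), list- and dict-valued entries could be JSON-encoded while plain strings pass through unchanged:

```yaml
# Hypothetical variant of the envVars loop in templates/prefect.yaml:
# strings are quoted as-is, lists/dicts are rendered via toJson.
env:
  {{- range $k, $v := .Values.envVars }}
  - name: {{ $k }}
    {{- if kindIs "string" $v }}
    value: {{ $v | quote }}
    {{- else }}
    value: {{ $v | toJson | quote }}
    {{- end }}
  {{- end }}
```

With a sketch like this, a list value such as `prefectLabels: [dev, prod]` would render as the JSON string `["dev","prod"]`, matching the format Prefect expects for `PREFECT__CLOUD__AGENT__LABELS`.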
docs/source/admin_guide/prefect.md
```
- IMAGE_PULL_POLICY
```
For example, if you just want to override the amount of CPU limits for each job, you would need to craft a declarative configuration, in you qhub-config.yaml file, as follows:
typo: "you" should be "your"
...directory }}/infrastructure/modules/kubernetes/services/prefect/chart/templates/prefect.yaml
So based on @Adam-D-Lewis's comment and some talks I had with him and @costrouc about the agent helm chart as a whole, I've decided to keep the override values under `overrides`:

```yaml
overrides:
  extraVars:
    PREFECT__CONTEXT__SECRETS__AWS_CREDENTIALS: eyJBQ0NFU1NfS0VZIjogImFiY2RlZiIsICJTRUNSRVRfQUNDRVNTX0tFWSI6ICJnaGlqa2xtbiJ9Cg==
  prefect_agent:
    prefectLabels: [dev, prod]
    job:
      resources:
        requests:
          memory: 20Gi
```

where in this case we will override the
The result can be seen in the cluster deployment for the agent:

```
PREFECT__CLOUD__AGENT__LABELS: ["dev","prod"]
JOB_MEM_REQUEST: 20Gi
PREFECT__CONTEXT__SECRETS__AWS_CREDENTIALS: {"ACCESS_KEY": "abcdef", "SECRET_ACCESS_KEY": "ghijklmn"}
```
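As a side note (not part of the PR itself), the `PREFECT__CONTEXT__SECRETS__AWS_CREDENTIALS` value in the overrides above is base64-encoded JSON; decoding it with the standard `base64` tool shows how the plain-text value in the deployment output is obtained:

```shell
# Decode the base64-encoded AWS credentials secret from the overrides example above
echo 'eyJBQ0NFU1NfS0VZIjogImFiY2RlZiIsICJTRUNSRVRfQUNDRVNTX0tFWSI6ICJnaGlqa2xtbiJ9Cg==' | base64 -d
# prints: {"ACCESS_KEY": "abcdef", "SECRET_ACCESS_KEY": "ghijklmn"}
```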
For `extraVars`, we'd need to add some documentation explaining it. Also, maybe call it `envVars` or `extraEnvVars` to make it clear that these are environment variables?
I've made some adjustments that don't require base64 encoding of environment variables. Check it out, @viniciusdc, and let me know if this meets all your needs. I made a repo where I was testing out a local deployment of this branch, in case that's helpful for you: https://github.com/Quansight/grafana-prometheus-qhub/tree/prefect_overrides
Hi @Adam-D-Lewis, thank you so much for that! I noticed that if we pass a dict as a key in the overrides `envVars`, e.g.

Also, I've included an extra
...directory }}/infrastructure/modules/kubernetes/services/prefect/chart/templates/prefect.yaml
Looks great! Thanks @viniciusdc and @Adam-D-Lewis for working hard on this PR.