Expose Customization Options for Migration Job in Helm Chart's values.yaml #24033
Comments
Hey @lukeheath, @allenhouchins and I looked at this request during the unpacking-the-why ritual. The why is clear: they're trying to use Fleet's best-practice Helm chart to deploy Fleet in GCP. What we don't understand is whether this is a bug or a feature request. What do you think? Also, it looks like we're maintaining two different Helm charts:
We want to only maintain one right? |
@rfairburn is the best person to review and speak to this. |
Some background I found when Sean first asked me about this in Slack:
Kubernetes and Helm are a bit different, and I think the reason we have two is the history of Helm. At this point, IMO Helm is better for this, and the Helm chart has a lot more functionality than the plain Kubernetes manifests. |
@noahtalerman Maybe a quick call with you, @allenhouchins, and @rfairburn makes sense? Feel free to include me if I can be helpful. |
This makes 100% sense, and I believe I can do this very quickly if we use the same proxy options as the primary Fleet container. Even if we have separate entries in values.yaml, it shouldn't be terrible to implement. |
https://github.com/fleetdm/fleet/tree/main/docs/Deploy/_kubernetes is direct YAML and is likely out of date at this point. For Helm, the other (chart) location in the git repo is what is used to populate everything. |
@rfairburn if it's quick I say go for it. I added this to #g-customer-success board so y'all can prioritize.
Also, sounds like we want to get rid of this YAML if it's out of date and we have a different chart. |
#24412 is an initial draft that adds what I see as the main customization that would apply to migrations (the cloudsql sidecar). Per the PR notes, all of the sa/rbac stuff was moved to be created at init time so it is in place to be leveraged by the migration job. This still needs to be tested in gke for validation, but looks to pass dry-run applies locally. |
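To illustrate the shape of the change the PR describes, here is a minimal sketch of a Helm hook Job that reuses the deployment's CloudSQL proxy sidecar. The value paths (`gke.cloudSQL.*`), image tags, and resource names below are assumptions for illustration, not the chart's actual schema:

```yaml
# Hypothetical sketch only: value paths and names are assumed, not taken
# from the actual Fleet chart.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migration
  annotations:
    # Run before install/upgrade so the schema is ready when Fleet starts.
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      # Per the PR notes, the SA/RBAC objects are created at init time
      # so the migration job can use them here.
      serviceAccountName: {{ .Release.Name }}
      containers:
        - name: fleet-migration
          image: fleetdm/fleet:{{ .Values.imageTag }}
          command: ["fleet", "prepare", "db"]
        {{- if .Values.gke.cloudSQL.enableProxy }}
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:{{ .Values.gke.cloudSQL.imageTag }}
          command:
            - /cloud_sql_proxy
            - -instances={{ .Values.gke.cloudSQL.instanceName }}=tcp:3306
        {{- end }}
```

One design wrinkle worth noting: a long-running proxy sidecar keeps a Job's pod from ever completing. Common workarounds are wrapping the migration command to signal the proxy to exit, or (on Kubernetes 1.28+) declaring the proxy as a native sidecar via an init container with `restartPolicy: Always`, which the kubelet terminates once the main container finishes.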
Migration job's voice heard, |
Problem
Currently, the Helm chart provides various customization options in the values.yaml file for the deployment resource, such as communicating with the MySQL DB via a CloudSQL proxy sidecar container.
However, similar customization options are not available for the migration job resource, even though both the deployment and job would likely need to communicate with the database using the same mechanisms.
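As a concrete sketch of the gap, the fragment below shows the kind of values.yaml shape involved. Only `enableProxy` is named in this issue; all other keys and values are hypothetical placeholders:

```yaml
# Illustrative values.yaml fragment. Apart from enableProxy, these key
# names and values are assumptions, not the chart's actual schema.
gke:
  cloudSQL:
    enableProxy: true          # today this only affects the Deployment
    imageTag: "1.33.2"
    instanceName: "my-project:us-central1:fleet-mysql"

# Proposed: either mirror the same knobs for the migration Job, or have
# the Job reuse the settings above so both resources reach MySQL the
# same way.
migrationJob:
  enableProxy: true
```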
What have you tried?
We are deploying Fleet in GCP. We import the helm chart, and deploy it with customizations suitable for our environment. Our application ultimately fails because it gets stuck on the job. The job times out, and when looking at the failed pod's log, it shows:
Failed to start: creating db connection: dial tcp 127.0.0.1:3306: connect: connection refused
When inspecting the job's pod manifest, we can see that there is no proxy sidecar container, even though `enableProxy` from values.yaml is true and the sidecar container is defined in the deployment's manifest.
Potential solutions
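One possible approach is a shared named template so the Deployment and the migration Job always render the same sidecar from the same values. This is a sketch under assumed value paths (`gke.cloudSQL.*`), not the chart's actual layout:

```yaml
{{/* Hypothetical helper (e.g. in _helpers.tpl) shared by both the
     Deployment and the migration Job templates. */}}
{{- define "fleet.cloudsqlSidecar" -}}
{{- if .Values.gke.cloudSQL.enableProxy }}
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:{{ .Values.gke.cloudSQL.imageTag }}
  command:
    - /cloud_sql_proxy
    - -instances={{ .Values.gke.cloudSQL.instanceName }}=tcp:3306
{{- end }}
{{- end }}
```

Each template would then include it alongside its main container, e.g. `{{- include "fleet.cloudsqlSidecar" . | nindent 8 }}`, so a single values.yaml toggle covers both resources instead of only the deployment.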
What is the expected workflow as a result of your proposal?
I should be able to use the Helm chart and have it deploy successfully by modifying only the customization options exposed via the values.yaml file.