feat: add support for helmfile in the apps model #6660
Conversation
@jstrachan Given the size and scope of this PR, would you mind making it WIP until it's been looked at in detail by people who are familiar with the relevant code/behavior to be very sure it won't cause any changes in the existing behavior? |
hmmm, what's about we do this experimental stuff on a feature branch? I am trying to understand how one would proceed if we decide "yet let's do this" or "no let's skip this". |
@hferentschik we are using a feature flag of |
/hold
/retest
1 similar comment
/retest
pkg/cmd/opts/version.go (Outdated)
}
teamSettings := tc.TeamSettings()

// lets default to local file system for the requirements as we are often invoked before we've created the cluster
Would it not be better to default to the connected cluster? Otherwise I can envisage a situation where I execute the command inside a checkout of an environment but connected to a different cluster, and get odd results.
the main reason for this logic is that sometimes we are invoking this `EnvironmentContext` before we have even created any `Environment` resources (e.g. early `jx` commands in the boot pipeline) so there's nothing to connect to really.
I've modified the comment to make this all a bit more clear:
Line 36 in 00c0cee:
// lets default to local file system for the requirements as we being invoked
I wasn't questioning why you would look on the local filesystem, just why it was the default.
For the case that we are early in the boot pipeline it's obvious it needs to look at the file system. In this case if we defaulted to the connected cluster it wouldn't find the resources there so would continue to look at the file system.
However after boot it would then default to the connected cluster.
got it - yeah I've added a boolean flag now to indicate which mode to use so it's more clear
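For readers following along, the boolean-flag fallback being described could be sketched roughly like this. All the names here (`loadRequirements`, `fromLocalFile`, `fromTeamSettings`, the `Requirements` type) are hypothetical stand-ins for illustration, not the actual jx API:

```go
package main

import "fmt"

// Requirements is a simplified stand-in for jx's RequirementsConfig.
type Requirements struct{ Source string }

// fromLocalFile simulates reading jx-requirements.yml from the local checkout.
func fromLocalFile(exists bool) (*Requirements, bool) {
	if !exists {
		return nil, false
	}
	return &Requirements{Source: "local file"}, true
}

// fromTeamSettings simulates reading requirements from the connected cluster's
// dev Environment / TeamSettings.
func fromTeamSettings(exists bool) (*Requirements, bool) {
	if !exists {
		return nil, false
	}
	return &Requirements{Source: "team settings"}, true
}

// loadRequirements prefers the local file when preferRequirementsFile is true
// (e.g. early boot steps, before any Environment resources exist), otherwise
// it prefers the connected cluster and falls back to disk.
func loadRequirements(preferRequirementsFile, haveFile, haveCluster bool) (*Requirements, error) {
	if preferRequirementsFile {
		if r, ok := fromLocalFile(haveFile); ok {
			return r, nil
		}
		if r, ok := fromTeamSettings(haveCluster); ok {
			return r, nil
		}
	} else {
		if r, ok := fromTeamSettings(haveCluster); ok {
			return r, nil
		}
		if r, ok := fromLocalFile(haveFile); ok {
			return r, nil
		}
	}
	return nil, fmt.Errorf("no requirements found on disk or in the cluster")
}

func main() {
	r, _ := loadRequirements(true, true, true)
	fmt.Println(r.Source) // boot mode prefers the local file
	r, _ = loadRequirements(false, true, true)
	fmt.Println(r.Source) // after boot, the cluster wins
}
```

Either way, both sources remain reachable; the flag only decides which one is consulted first.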
Couple of minor comments but looks good to me. The apps stuff is well tested already so I'm reasonably confident this won't break existing functionality.
/hold cancel
//
// if preferRequirementsFile is true we look for the local `jx-requirements.yml` file first and then fall back to the
// development environment settings; otherwise we try to use the requirements from the environment if present
func (o *CommonOptions) EnvironmentContext(dir string, preferRequirementsFile bool) (*envctx.EnvironmentContext, error) {
Here we seem to mix up team context and environment context - it would be more understandable with a single phrase.
I couldn't think of a better name TBH ;) - it's a combination of Environment/TeamSettings/VersionResolver/RequirementsConfig - which are mostly associated with an environment. For single clusters it's the dev environment which owns the team settings/requirements. For remote Staging/Production it has its own EnvironmentContext.
Any better ideas?
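For what it's worth, the grouping being described is essentially a plain aggregate struct. A rough sketch (the field types below are simplified stand-ins for illustration, not the real jx types):

```go
package main

import "fmt"

// Simplified stand-ins for the real jx types.
type TeamSettings struct{ BootRequirements string }
type RequirementsConfig struct{ ClusterName string }
type VersionResolver struct{ VersionsDir string }

// EnvironmentContext bundles the per-environment state that the apps
// commands previously loaded piecemeal in each command.
type EnvironmentContext struct {
	DevEnv          string // name of the dev Environment resource
	Team            *TeamSettings
	Requirements    *RequirementsConfig
	VersionResolver *VersionResolver
	GitOps          bool
}

func main() {
	ec := &EnvironmentContext{
		DevEnv:       "dev",
		Requirements: &RequirementsConfig{ClusterName: "demo"},
		GitOps:       true,
	}
	fmt.Printf("env=%s cluster=%s gitops=%v\n", ec.DevEnv, ec.Requirements.ClusterName, ec.GitOps)
}
```

Passing one such context around also sidesteps the long parameter lists discussed later in this review.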
one day it might be nice to combine TeamSettings/Requirements into a single object and be able to create it from an existing cluster or - if in pre-install stages inside boot commands - from disk. Though this PR is already big enough, so I didn't want to do any more large refactoring any time soon...
Let me propose changes here :-)
Ok - changes suggested - just var name really.
exists := false
if preferRequirementsFile {
// lets default to local file system for the requirements as we being invoked
I think it would be better to default to the team settings - as I would always want to default to the cluster I'm connected to, not the thing I have checked out on disk. Or maybe even better: detect the discrepancy and ask people if they want to continue?
what if there's no Environment/TeamSettings yet as the cluster is totally empty?
Then it should go to the fallback of reading from disk. I'm not proposing NOT reading from disk, I'm just saying make it the backup.
I am also wondering why we need to integrate this right away. Feature flag or not, I still feel this is quite invasive as it stands. I would also like to better understand how this fits into the bigger picture of the planned boot refactoring.
just checking, what's the planned boot refactoring? If you mean the enhancement proposal for boot apps, this is all related: https://github.com/jenkins-x/enhancements/tree/master/proposals/2
@hferentschik see the docs for the UX https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/apps.md - the order doesn't really matter; but if folks really wanted they could manually edit the it knows where to add things as with
@hferentschik where are the design docs for the planned boot refactoring?
New changes are detected. LGTM label has been removed.
@hferentschik I just re-pushed the changes without those
@hferentschik any idea what is needed to get past your 'hold'?
Here are my thoughts on Boot w/ Helmfile. Most important upfront: I also think Helmfile is the way forward. AFAICT it solves many of our current issues. My two main concerns are around apps.yaml and the use of the version stream to store the Helm chart values.

Regarding apps.yaml, I am wondering what we gain by using it if we at the same time still generate and expose helmfile.yaml. IMO adding apps.yaml adds an additional level of abstraction which makes things harder for the user as well as for development. helmfile.yaml might be a complex configuration file, but if we decide we want to expose this to the user it makes sense to do so consistently. The alternative for me would be to say that we have our own config, with the plan to not expose helmfile.yaml at all. I don't think, however, that this is what we are after here.

Regarding the use of the version stream, I am wondering whether it would not be better to inline the config into the Helmfile. It probably gets quite big and unwieldy to work with, but at the same time one can easily reason about what exactly gets applied to the cluster. Personally, I am not so much concerned about the value files being in the dev repository; my concern is that it is not really clear how and where they are used. This is IMO resolved with Helmfile, where the values appear next to the chart and version they are used in.

Last but not least, I am concerned about the fact that when we experiment, we have a tendency to expose things to the user before having done a couple of rounds of refinement. For example, the changes to the version stream are already pushed to master now, meaning they will go out to the users. For me, the version stream is part of the "public API", and for a user it gets confusing what the various pieces of information mean. Maybe the work @abayer is up to can help there.
Much appreciated, thanks.
@@ -86,7 +86,7 @@ func (d Deployment) URL(kc kubernetes.Interface, a Application) string {
	return url
}

-// GetApplications fetches all Applications
+// GetApplications fetches all Apps
So `GetApplications` so far fetched the user's "applications", right? Now you want to call this "Apps" and unify it with the current Apps plugins so everything becomes apps? We really are in a bit of a naming mess here.
see https://github.com/jenkins-x/enhancements/tree/master/proposals/1 but I'm also hoping to submit a further enhancement to address the mess of `jx get app` vs `jx get applications` - but that would be another enhancement + PR etc...
@@ -61,6 +60,9 @@ var (
	# Add an app
	jx add app jx-app-jacoco

+	# Add an app using a chart repository prefix from the version stream 'charts/repositories.yml' file
Here things get confusing for me. So far this command is documented as being responsible for installing "apps", which are similar to "addons". So without any further context, I read this as "install an addon from a chart repository prefix from the version stream 'charts/repositories.yml' file", and I'm not sure what the second part really means.
Do you have a plan for how we get out of this naming mess?
the plan is to always use app and remove addon: https://github.com/jenkins-x/enhancements/tree/master/proposals/1
@@ -82,68 +85,109 @@ func NewCmdCreateHelmfile(commonOpts *opts.CommonOptions) *cobra.Command {

	// Run implements the command
	func (o *CreateHelmfileOptions) Run() error {

-	apps, err := config.LoadApplicationsConfig(o.dir)
+	apps, _, err := config.LoadAppConfig(o.dir)
I realise that `CreateHelmfileOptions` already exists now, but it would have been nice to implement this new Helmfile support outside of this Options framework. One thing we need to get away from is this whole Options hierarchy thing.
	if err != nil {
		return errors.Wrap(err, "failed to generate system helmfile")
	}

	return nil
}

-func (o *CreateHelmfileOptions) generateHelmFile(applications []config.Application, err error, localHelmRepos map[string]string, apps *config.ApplicationConfig, phase string) error {
+// contains the repo url and name to reference it by in the release spec
+func (o *CreateHelmfileOptions) generateHelmFile(ec *envctx.EnvironmentContext, helmPrefixes *versionstream.RepositoryPrefixes, applications []config.App, charts map[string]*envctx.ChartDetails, err error, localHelmRepos map[string]string, apps *config.AppConfig, phase string) error {
that's a hell of a function. IMO this would require decomposition.
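One conventional way to tame a signature like that in Go is to collect the parameters into a single input struct, so the function takes one argument and gaining a field later doesn't change the signature. A toy sketch (all names here are illustrative, not a proposed jx API):

```go
package main

import "fmt"

// helmfileInput groups the many values the generator needs into one struct,
// so the function signature stays stable as fields are added.
type helmfileInput struct {
	Phase          string
	Applications   []string
	LocalHelmRepos map[string]string
}

// generateHelmfile is a toy stand-in showing the single-struct signature.
func generateHelmfile(in helmfileInput) (string, error) {
	if in.Phase == "" {
		return "", fmt.Errorf("phase is required")
	}
	return fmt.Sprintf("helmfile for phase %q with %d releases", in.Phase, len(in.Applications)), nil
}

func main() {
	out, err := generateHelmfile(helmfileInput{Phase: "apps", Applications: []string{"jx-app-jacoco"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The same effect could also be had by passing the `EnvironmentContext` discussed earlier in this review, since much of the parameter list is environment state.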
I took the hold off.
@hferentschik on your thoughts: we did initially wonder about just using vanilla helmfiles directly; there were discussions on the issue + proposal to use either separate files, or one file with label selectors, to represent the system / apps phases. the main reason we've gone, for now, with
@hferentschik on the version stream comment: one of the issues with all those values binding files in the current boot is that they get copied into the user's development environment git repository. They should be versioned somewhere, say in git, and shared across most installations. Inlining them inside the helmfile would lead to copy/paste of these complex files and to all the git rebase/merge/cherry-pick issues. So we really want to put them somewhere shared that's not the dev git repository - though leave the ability for folks to add their own custom values files if they want (but most folks don't need to).

The idea of using the version stream is so we have a single git repository that can verify changes to the values files and/or chart versions and/or container images and/or packages and/or quickstarts. So a single, simple place to make changes and verify they still work. e.g. imagine a new version of tekton needs a new values binding file: we can make a trivial PR on the version stream and verify both the new version + the new values binding file in 1 PR with the associated BDD tests.

We could look at moving the values files to another git repository; but then we'd have to do 2 PRs in 2 different places, which gets more complex. So in terms of making it easier to make changes in a consistent way, using 1 git repository for all of these things feels like the best approach for now. But we can review and improve this further if needed. BTW folks can totally opt out of the version stream completely and version everything by hand in their
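To make the trade-off concrete, a generated helmfile release along the lines being discussed might reference a shared values file resolved from the version stream checkout rather than inlining it. This is an illustrative sketch only; the paths, names, and versions below are assumptions, not the actual generated layout:

```yaml
# illustrative sketch - not the actual generated helmfile
repositories:
  - name: jenkins-x
    url: http://chartmuseum.jenkins-x.io

releases:
  - name: tekton
    chart: jenkins-x/tekton
    version: 0.0.51          # pinned via the version stream
    values:
      # shared binding file resolved from the version stream checkout, so it
      # is versioned and tested alongside the chart version it belongs to...
      - versionStream/apps/tekton/values.yaml.gotmpl
      # ...while purely local overrides stay in the dev repository
      - env/tekton/values.yaml
```

The point of contention is the first `values` entry: resolved from the version stream it is shared and upgradeable in one PR; inlined or copied, it is explicit but must be merged by every user on upgrade.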
Totally understood. I'd love a way to mark enhancements/features in the CLI and inside CRDs. We need to be able to incrementally build and release features by following the best practices of the Accelerate book (trunk based development, working in small batches, no long-term feature branches etc) - but we also need a way to hide things from users, or at least make it clear which bits are alpha/beta/GA and so forth. Something a little like the
@jstrachan, thanks for all the clarifications. One thing I might have misunderstood is what we do with the generated Helmfiles. I thought they would, after initial generation, be part of version control, but that's not the case, right? The Helmfile is generated on the fly and just used temporarily. Only jx-apps.yaml is under version control, right?

I guess that would still worry me. I don't really think that it's either jx-apps.yaml or Helmfile - not if one wants a fully working solution all the way down. I think we need to commit to one approach.
@hferentschik totally agree, I'd prefer the one approach. Right now the generated helmfiles are never checked in, so they are an invisible implementation detail to end users (unless someone is working underneath the covers). Also, if we move towards a future where the boot pipeline only runs server side, you'd never see the generated helmfiles on your local laptop (you'd have to
Is it really such a bad thing that they are copied? I don't have a problem with that. It's transparent at least, whereas your approach has several levels of indirection which make it harder to reason about things or even to understand how things fit together. It is a trade-off really.
I think we could get this sorted. I was also wondering how having the value bindings in the version stream affects the upgrade process. Even though not ideal, at least conceptually we are able to detect conflicts and get the user to resolve them. If the value bindings are in the version stream and all the upgrade does is advance the version stream ref, we have no way of telling whether this would conflict with any potential local changes/overrides the user made. WDYT?
I don't think this is a great solution either. I guess version stream or inlining are the real options here.
I just wish we would contain these initial review-and-improve cycles to a smaller set of people (e.g. the core contributors) as opposed to releasing each one out into the wild.
Which is partly worrying me as well. One problem I think we are facing is that we always say that you can also do X instead of Y, etc. Suddenly you have to support and maintain a multitude of different approaches.
If we were to treat Helmfile as an implementation detail, would we also not mention any Helmfile specifics in the docs? So jx-apps.yaml is our "API" and Helmfile an implementation detail? That would basically be one of the two possible approaches I have been mentioning earlier.
One thing I like about the generated Helmfile is that it makes explicit how things fit together in terms of chart + value binding + secrets, etc. jx-apps.yaml hides that again and leaves me scratching my head over where and how things are getting expanded and combined.
My concern at the moment is that we do not have a way to upgrade users from a boot based cluster to a boot & helmfile based cluster. Ideally a user would not notice the change and it would just happen as part of
Yes it is. The reason I didn't want to copy them from the boot config into the git repository of every Dev / Staging / Production git repository that anyone ever creates is that it just introduces a maintenance nightmare - every time an upstream chart needs its binding changed (e.g. a changed/new values.yaml bit of config) or we introduce a new bit of config, add a new field to

The current design in the enhancement proposal leaves all the system binding files in the version stream, and the only source code that goes into each Dev / Staging / Production git repository is 100% local configuration - so no copy/paste is ever needed, nor is any git rebase/cherry-pick/merge hell forced on our end users.
it lets us update the chart versions / images and binding files consistently in 1 PR - that's good, right?
I think you've missed my point. We don't want users git-merging complex YAML files - we want to make it really easy to reliably upgrade/downgrade Jenkins X for mere mortals, or for simple tooling that works reliably.
It's really hard to get feedback in an open source project without doing things in public. That's kinda the point of OSS :) Though it would be nice to be able to mark commands / CLI arguments / jx-requirements.yml flags as experimental / alpha / beta / GA so it's easier to experiment upstream without worrying about affecting beta/GA stuff, e.g. like
Given that all the commands like

If folks want to go off plan and do their own additional pure helmfile stuff in the pipeline, then fine, they are on their own - they don't get promote / boot upgrades / version streams / apps behaviour on those custom helmfile things - but maybe that's OK for some users.
Yes - end users adding/removing apps or promoting apps and configuring local or remote Staging/Production environments don't need to know anything about helmfile, so there's no real need to document it there. We've always used the same approach in Jenkins X. An end user creating quickstarts that deploy to kubernetes doesn't need to know about

A good example of this approach in action is helm charts themselves - they describe what you need to put in your

The Jenkins X contributors guide can peel back the curtain and go into graphic detail of how to debug apps via helmfile + the complexities of helmfile's values file injection with paths + wildcards + templating, inheritance and the like - but that's for contributors + app developers, not end users building cloud native apps to deploy to kubernetes.
there's a massive difference between:

We don't have to break (1) to fix (2). e.g. look at tools like

i.e. if a developer wants to see how all this stuff works, we can have a boot command to explain how it all fits together. As we've already mentioned in #6672, it would be awesome for each PR on any Environment git repository using the new helmfile / helm 3 stuff to generate a PR comment with the diff (created via
@jstrachan are you able to provide an update on the status of this PR and #6664? Is this a feature we can start playing around with, or not quite yet?
I'm gonna close this PR for now - we're using this approach for folks to test out helm3 / helmfile: https://jenkins-x.io/docs/labs/boot/
this PR extends the current apps commands to work with helmfile if it's enabled in the requirements (via `helmfile: true`) and if the dev environment is using the helmfile file layout (i.e. using `jx-apps.yml`).

for background on the requirements and design see: https://github.com/jenkins-x/enhancements/tree/master/proposals/2 and issue: #6442

the user guide is here: https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/README.md

in particular, for notes on how this works with the current apps framework see: https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/apps.md and https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/how-it-works.md
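For readers trying this out, the `helmfile: true` requirements flag mentioned above would look roughly like this in `jx-requirements.yml`. This is an illustrative fragment; the surrounding fields and exact placement are assumptions for context:

```yaml
# jx-requirements.yml (illustrative fragment; exact field placement may differ)
cluster:
  clusterName: mycluster
  provider: gke
helmfile: true   # enables the experimental helmfile layout for the apps commands
```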
Summary of changes
this PR unfortunately touches quite a few files, as the app logic to add/update/remove/get is spread across a number of files. Most of the files changed are actually test data files; in terms of go files it's 30 files changed: https://github.com/jenkins-x/jx/pull/6660/files?file-filters%5B%5D=.go mostly around the (get|add|update|delete)_apps related code and the associated gitops helper files.

To simplify the code I added the `envctx` package which has an `EnvironmentContext` helper class that wraps up the code for creating the `RequirementsConfig` / `TeamSettings` / `VersionResolver` / GitOps flag, to avoid copy/paste between the apps commands.

There's a few changes in the `config` package, mostly around the `AppConfig*` structs and associated code, which are only used right now in the `jx step create helmfile` code for helmfile.