
feat: add support for helmfile in the apps model #6660

Closed
wants to merge 20 commits

Conversation

@jstrachan (Member) commented Jan 29, 2020

this PR extends the current apps commands to work with helmfile if it's enabled in the requirements (via helmfile: true) and if the dev environment is using the helmfile file layout (i.e. using jx-apps.yml)
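For illustration, a minimal sketch of the flag in jx-requirements.yml - only the helmfile flag is from this PR; the surrounding fields are just typical context and their exact placement is an assumption:

```yaml
# jx-requirements.yml (illustrative sketch)
cluster:
  provider: gke
webhook: lighthouse
# opt in to the helmfile / helm 3 layout (the feature flag this PR checks)
helmfile: true
```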

for background on the requirements and design see: https://github.com/jenkins-x/enhancements/tree/master/proposals/2 and issue: #6442

the user guide is here: https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/README.md

in particular notes on how this works with the current apps framework see: https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/apps.md and https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/how-it-works.md

Summary of changes

this PR unfortunately touches quite a few files, as the app logic to add/update/remove/get is spread across a number of files. Most of the changed files are actually test data files; in terms of Go files it's 30 files changed:
https://github.com/jenkins-x/jx/pull/6660/files?file-filters%5B%5D=.go mostly around the (get|add|update|delete)_apps related code and the associated gitops helper files.

To simplify the code I added the envctx package, which has an EnvironmentContext helper that wraps up the creation of the RequirementsConfig / TeamSettings / VersionResolver / GitOps flag and avoids copy/paste between the apps commands.
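To make the shape of that helper concrete, here is a rough sketch assembled from the description above rather than from the PR diff, so treat the exact field names and types as assumptions:

```go
// Package envctx - sketch of the EnvironmentContext helper described above.
package envctx

import (
	v1 "github.com/jenkins-x/jx/pkg/apis/jenkins.io/v1"
	"github.com/jenkins-x/jx/pkg/config"
	"github.com/jenkins-x/jx/pkg/versionstream"
)

// EnvironmentContext wraps up the state the apps commands all need, so it
// is created once instead of being copy/pasted across the commands.
type EnvironmentContext struct {
	// Requirements is the loaded jx-requirements.yml configuration
	Requirements *config.RequirementsConfig

	// TeamSettings holds the dev environment's team settings
	TeamSettings *v1.TeamSettings

	// VersionResolver resolves chart/package versions via the version stream
	VersionResolver *versionstream.VersionResolver

	// GitOps indicates whether changes should be made via pull requests on
	// the environment git repository rather than applied directly
	GitOps bool
}
```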

There are a few changes in the config package, mostly around the AppConfig* structs and associated code, which are currently only used in the jx step create helmfile code.

@abayer (Contributor) commented Jan 29, 2020

@jstrachan Given the size and scope of this PR, would you mind making it WIP until it's been looked at in detail by people who are familiar with the relevant code/behavior to be very sure it won't cause any changes in the existing behavior?

@hferentschik

hmmm, what about doing this experimental stuff on a feature branch? I am trying to understand how we would proceed if we decide "yes, let's do this" or "no, let's skip this".

@jstrachan (Member Author) commented Jan 29, 2020

@hferentschik we are using a feature flag of helmfile: true in jx-requirements.yml to isolate all changes from anyone not using helmfile / helm 3 - just like we've always done throughout the lifetime of jx as we introduced knative / Tekton / boot etc

@jstrachan (Member Author)

/hold
I'm waiting for @pmuir to review the changes to the apps code

@pmuir pmuir changed the title fix: add support for helmfile in the apps model feat: add support for helmfile in the apps model Jan 29, 2020
@jstrachan (Member Author)

/retest

@jstrachan (Member Author)

/retest

}
teamSettings := tc.TeamSettings()

// lets default to local file system for the requirements as we are often invoked before we've created the cluster
Reviewer (Contributor):

Would it not be better to default to the connected cluster? Otherwise I can envisage a situation where I execute the command inside a checkout of an environment but connected to a different cluster, and get odd results.

Author (@jstrachan):

the main reason for this logic is that sometimes we are invoking this EnvironmentContext before we have even created any Environment resources (e.g. early jx commands in the boot pipeline) so there's nothing to connect to really.

I've modified the comment to make this all a bit more clear:

// lets default to local file system for the requirements as we being invoked

Reviewer (Contributor):

I wasn't questioning why you would look on the local filesystem, just why it is the default.

For the case where we are early in the boot pipeline it obviously needs to look at the file system. In that case, if we defaulted to the connected cluster, it wouldn't find the resources there, so it would continue to look at the file system.

However, after boot it would then default to the connected cluster.

Author (@jstrachan):

got it - yeah, I've added a boolean flag now to indicate which mode to use so it's clearer

@pmuir (Contributor) commented Jan 30, 2020

Couple of minor comments but looks good to me. The apps stuff is well tested already so I'm reasonably confident this won't break existing functionality.

@jstrachan (Member Author)

/hold cancel

//
// if preferRequirementsFile is true we look for the local `jx-requirements.yml` file first and then fall back to the
// development environment settings; otherwise we try to use the requirements from the environment if present
func (o *CommonOptions) EnvironmentContext(dir string, preferRequirementsFile bool) (*envctx.EnvironmentContext, error) {
Reviewer (Contributor):

Here we seem to mix up team context and environment context - it would be more understandable with a single phrase.

Author (@jstrachan):

I couldn't think of a better name TBH ;) - it's a combination of Environment/TeamSettings/VersionResolver/RequirementsConfig, which are mostly associated with an environment. For single clusters it's the dev environment which owns the team settings/requirements. Remote Staging/Production environments each have their own EnvironmentContext.

Any better ideas?

Author (@jstrachan):

one day it might be nice to combine TeamSettings/Requirements into a single object that can be created from an existing cluster or - in pre-install stages inside boot commands - from disk. Though this PR is already big enough, so I didn't want to do any more large refactors any time soon...

Reviewer (Contributor):

Let me propose changes here :-)

Reviewer (Contributor):

OK - changes suggested - just a var name really.


exists := false
if preferRequirementsFile {
// lets default to local file system for the requirements as we being invoked
Reviewer (Contributor):

I think it would be better to default to the team settings - I would always want to default to the cluster I'm connected to, not the thing I have checked out on disk. Or maybe even better: detect the discrepancy and ask people if they want to continue?

Author (@jstrachan):

what if there's no Environment/TeamSettings yet as the cluster is totally empty?

Reviewer (Contributor):

Then it should go to the fallback of reading from disk. I'm not proposing NOT reading from disk, I'm just saying make it the backup.

@hferentschik

So does this mean jx add app now adds to the helmfile if helmfile support is enabled? How does it know where to add things? Isn't part of the helmfile idea that order matters?

I am also wondering why we need to integrate this right away into jx apps [get|add|remove]. Could we not go with a manual edit of the helmfile for now instead?

Feature flag or not, I still feel this is quite invasive as it stands. I also would like to better understand how this will fit into the bigger picture of the planned boot refactoring.

@rawlingsj (Member)

I also would like to better understand how this will fit into the bigger picture of the planned boot refactoring

just checking - what's the planned boot refactoring? If you mean the enhancement proposal for boot apps, this is all related: https://github.com/jenkins-x/enhancements/tree/master/proposals/2

@jstrachan (Member Author)

@hferentschik see the docs for the UX https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/apps.md

order doesn't really matter; but if folks really wanted, they could manually edit the jx-apps.yml at some later stage.

it knows where to add things because with helmfile: true enabled in the requirements we use jx-apps.yml instead of env/requirements.yaml when adding/removing apps. See https://github.com/jenkins-x/enhancements/blob/master/proposals/2/docs/how-it-works.md
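For reference, a minimal jx-apps.yml along the lines of the proposal docs - the app names below are illustrative:

```yaml
# jx-apps.yml (illustrative): apps are referenced by a chart repository
# prefix + chart name, much like a helm requirements file; versions are
# resolved from the version stream when omitted
apps:
- name: jenkins-x/lighthouse
- name: stable/nginx-ingress
```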

@jstrachan (Member Author)

@hferentschik where are the design docs for the planned boot refactoring?

@jenkins-x-bot (Contributor)

New changes are detected. LGTM label has been removed.

@jstrachan (Member Author)

@hferentschik I just re-pushed the changes without those go.mod commits. Anything else I can do to convince you to remove the /hold?

@jstrachan (Member Author)

@hferentschik any idea what is needed to get past your 'hold'?

@hferentschik

Here are my thoughts on Boot w/ Helmfile.

Most important upfront: I also think Helmfile is the way forward. AFAICT it solves many of our current issues.

My two main concerns are around apps.yaml and the use of the version stream to store the Helm Chart values.

Regarding apps.yaml, I am wondering what we gain by using it if at the same time we still generate and expose helmfile.yaml. IMO adding apps.yaml adds an additional level of abstraction which makes things harder for the user as well as for development. helmfile.yaml might be a complex configuration file, but if we decide we want to expose this to the user it makes sense to do so consistently. The alternative for me would be to say that we have our own config and plan to not expose helmfile.yaml at all. I don't think, however, that this is what we are after here.
Plain helmfile.yaml feels like the natural first step. One can then evaluate later whether it makes sense to add another layer of abstraction.

Regarding the use of the version stream, I am wondering whether it would not be better to inline the config into the Helmfile. It probably gets quite big and unwieldy to work with, but at the same time one can easily reason about what exactly gets applied to the cluster. Personally, I am not so much concerned about the value files being in the dev repository; my concern is that it is not really clear how and where they are used. This is IMO resolved with Helmfile, where the values appear next to the chart and version they are used with.
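To make that comparison concrete, here is a hedged helmfile.yaml sketch of the two options being discussed - chart names, versions, paths and values are all made up:

```yaml
releases:
# Option 1: values pulled in from a shared file, e.g. one resolved out of
# the version stream (the path is illustrative)
- name: tekton
  chart: jenkins-x/tekton
  version: 0.4.74
  values:
  - versionStream/apps/tekton/values.yaml

# Option 2: the same idea with the values inlined next to the chart and
# version, as suggested above
- name: lighthouse
  chart: jenkins-x/lighthouse
  version: 0.1.0
  values:
  - replicaCount: 2
```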
I am also not quite sure what the suggested approach of using the version stream implies for the upgrade process. I guess upgrade would mean updating the version stream ref, but then there is no telling what changes are coming in from the version stream and whether they potentially conflict with the user's custom configuration. The current "cherry picking commits" approach addresses this part, even though it is broken in its current form.
I think it might be better to have everything fully versioned in the Helmfile (a bit like a fully resolved pom.xml after a release). Upgrade-wise we might then still need something along the lines of the current git approach, but provided we fix the underlying bugs there is a way to give the user more information on what changes.
Generally, I feel that we are starting to overuse the version stream by adding too many things to it.

Last but not least, I am concerned that when we experiment, we have a tendency to expose things to the user before having done a couple of rounds of refinement. For example, the changes to the version stream have already been pushed to master, meaning they will go out to users. For me, the version stream is part of the "public API", and for a user it gets confusing what the various pieces of information mean. Maybe the work @abayer is up to can help there.

@hferentschik

I just re-pushed the changes without those go.mod commits

Much appreciated, thanks.

@@ -86,7 +86,7 @@ func (d Deployment) URL(kc kubernetes.Interface, a Application) string {
return url
}

-// GetApplications fetches all Applications
+// GetApplications fetches all Apps

Reviewer:

So far GetApplications were the user's "applications", right? Now you want to call these "Apps" and unify them with the current Apps plugins so everything becomes "apps"? We really are in a bit of a naming mess here.

Author (@jstrachan):

see https://github.com/jenkins-x/enhancements/tree/master/proposals/1 - I'm also hoping to submit a further enhancement to address the mess of jx get app vs jx get applications, but that would be another enhancement + PR etc...

@@ -61,6 +60,9 @@ var (
# Add an app
jx add app jx-app-jacoco

# Add an app using a chart repository prefix from the version stream 'charts/repositories.yml' file

Reviewer:

Here things get confusing for me. So far this command is documented as being responsible for installing "apps", which are similar to "addons". So without any further context, I read this as "install an addon using a chart repository prefix from the version stream 'charts/repositories.yml' file", and I'm not sure what the second part really means.

Do you have a plan how we get out of this naming mess?
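For context, this is roughly what a prefix entry in the version stream's charts/repositories.yml looks like (the URL below is illustrative):

```yaml
# charts/repositories.yml in the version stream (sketch)
repositories:
- prefix: jenkins-x
  URLs:
  - https://storage.googleapis.com/chartmuseum.jenkins-x.io
```

so a command like jx add app jenkins-x/lighthouse would, presumably, resolve the chart repository URL via the jenkins-x prefix instead of the user supplying the repository URL by hand.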


@@ -82,68 +85,109 @@ func NewCmdCreateHelmfile(commonOpts *opts.CommonOptions) *cobra.Command {

// Run implements the command
func (o *CreateHelmfileOptions) Run() error {

-	apps, err := config.LoadApplicationsConfig(o.dir)
+	apps, _, err := config.LoadAppConfig(o.dir)

Reviewer:

I realise that CreateHelmfileOptions already exists now, but it would have been nice to implement this new Helmfile support outside of this Options framework. One thing we need to get away from is this whole Options hierarchy thing.

if err != nil {
return errors.Wrap(err, "failed to generate system helmfile")
}

return nil
}

-func (o *CreateHelmfileOptions) generateHelmFile(applications []config.Application, err error, localHelmRepos map[string]string, apps *config.ApplicationConfig, phase string) error {
+// contains the repo url and name to reference it by in the release spec
+func (o *CreateHelmfileOptions) generateHelmFile(ec *envctx.EnvironmentContext, helmPrefixes *versionstream.RepositoryPrefixes, applications []config.App, charts map[string]*envctx.ChartDetails, err error, localHelmRepos map[string]string, apps *config.AppConfig, phase string) error {

Reviewer:

that's a hell of a function. IMO this would require decomposition.

@hferentschik

any idea what is needed to get past your ‘hold’

I took the hold off.

@jstrachan (Member Author) commented Feb 10, 2020

@hferentschik on your thoughts on jx-apps.yml:

we did initially wonder about just using vanilla helmfiles directly; there were discussions on the issue + proposal about using either separate files or one file with label selectors to represent the system / apps phases.

the main reasons we've gone, for now, with jx-apps.yml are that:

  • it's the simplest possible change for now in terms of promotion or add/remove app, i.e. instead of changing env/requirements.yaml we change jx-apps.yml and the files are very similar
  • it's the simplest possible end-user UX change, i.e. users don't need to know anything about helmfile to understand what apps are installed; there's just one file for their apps and it's like a helm requirements file.
  • it turns out real helmfiles get quite big quite quickly, so jx-apps.yml is a nicer UX for end users (simpler to grok & less noise)
  • even going pure helmfile we know we need at least 2 of them (one for system and one for apps), or we need to use label selectors. The latter turned out to be clumsy to use. So even the pure helmfile approach was much more complex from day one
  • in tools/commands/UIs, trying to grok what apps will be installed by boot when going pure helmfile is pretty hard; we'd need to scan the entire tree of files (helmfiles can include other files via wildcards + templating + inheritance), interpret every helmfile + its label selectors + interpret the jx pipeline, and try to guess all the times we'd run a helmfile sync in what directories, to guess what apps we think will be installed ahead of time. With jx-apps.yml we know what is going to happen via a single, simple, declarative file that's trivial to load. This means we can, over time, do a much better job of validation. e.g. if the webhook is lighthouse, we can verify/suggest/fail if the lighthouse app is not in jx-apps.yml (see the sketch after this list). We can do similar things around detecting what secrets are required for the user's defined set of apps they boot. Being able to introspect the apps & charts that are going to be installed, and their phase, is going to be increasingly useful as we improve the tooling around configuration, secrets, and ultimately build a lovely UI around adding/removing/editing/customising apps
  • whether we go pure helmfile only or use jx-apps.yml, we already need to generate/enrich/post-process the helmfiles to add version stream version information and extra default metadata + binding files. It's simpler to generate the helmfiles from something that's not a helmfile early in the pipeline, as it means we don't modify something that's checked into git by accident. e.g. we had this issue when we previously added default chart versions from the version stream into the current env/requirements.yaml; it was way too easy for folks to commit the version stream versions by accident, which then breaks the version stream as the user has now hard-coded the version.
  • even though we've gone, for now, with jx-apps.yml as it feels the simplest for users, the implementation and the tooling, the helmfile work is currently alpha. We can decide to go pure helmfile later before beta/GA. What's most important is that folks actually try it and give feedback. Let's decide after lots of feedback which approach feels best; I've worked a lot with the jx-apps.yml approach so far and it really feels right (despite me wanting to go 100% pure helmfile initially). I totally understand your reservations though. Let's keep a close eye on feedback on either side of this one and decide before beta/GA?
  • the use of jx step create helmfile can be disabled by anyone trivially via the usual Jenkins X Pipeline override mechanism; so if someone really, really wanted to go pure helmfile they could, and boot would just work; it's just that promote/add app/remove app/validation/tooling would all struggle.
  • by using jx-apps.yml we massively cut down the surface area of all the things jx promote/jx add app/jx remove app need to consider. e.g. if we went pure helmfile, how would we find which helmfiles to modify & how could we be sure the pipeline was invoking those files?
  • it is super trivial for users to add a mystuff/helmfile.yaml right now and add an extra helmfile sync in that directory. So jx-apps.yml does not in any way conflict with someone wanting to go pure helmfile and ignore the jx apps model + promotion stuff
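As a sketch of the validation point above - everything here is illustrative: the loader names are taken from the diffs in this PR, but the field names and the lighthouse app name are assumptions:

```go
// Hypothetical sketch: verify that when the webhook is lighthouse,
// the lighthouse app is present in jx-apps.yml.
package validate

import (
	"fmt"

	"github.com/jenkins-x/jx/pkg/config"
	"github.com/pkg/errors"
)

func validateApps(dir string) error {
	requirements, _, err := config.LoadRequirementsConfig(dir)
	if err != nil {
		return errors.Wrap(err, "failed to load jx-requirements.yml")
	}
	// loader name as used in this PR's diff
	apps, _, err := config.LoadAppConfig(dir)
	if err != nil {
		return errors.Wrap(err, "failed to load jx-apps.yml")
	}
	if requirements.Webhook == config.WebhookTypeLighthouse {
		for _, app := range apps.Apps {
			if app.Name == "jenkins-x/lighthouse" { // app name is illustrative
				return nil
			}
		}
		return fmt.Errorf("webhook is lighthouse but the lighthouse app is not in jx-apps.yml")
	}
	return nil
}
```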

@jstrachan (Member Author)

@hferentschik on the version stream comment:

So one of the issues with all those values binding files in the current boot is that they get copied into the user's development environment git repository. They should be versioned somewhere in, say, git and shared across most installations.

Inlining them inside the helmfile would lead to copy/paste of these complex files and to all the git rebase/merge/cherry-pick issues. So we really want to put them somewhere shared that's not the dev git repository - though leave the ability for folks to add their own custom values files if they want (most folks won't need to).

The idea of using the version stream is so we have a single git repository that can verify changes to the values files and/or chart versions and/or container images and/or packages and/or quickstarts. So a single, simple place to make changes and verify they still work.

e.g. imagine if a new version of tekton needs a new values binding file. We can make a trivial PR on the version stream and verify both the new version + the new values binding file in 1 PR with the associated BDD tests.

We could look at moving the values files to another git repository, but then we'd have to do 2 PRs in 2 different places, which gets more complex. So in terms of making it easier to make changes in a consistent way, using 1 git repository for all of these things feels like the best approach for now. But we can review and improve this further if needed.

BTW folks can totally opt out of the version stream completely and version everything by hand in their jx-apps.yml - they can even opt out of that and go pure helmfile too. So we're not closing any doors; we're just trying out what feels right for now, getting feedback and continuously improving until we decide the new helmfile/helm3 work is GA.

@jstrachan (Member Author)

@hferentschik

Last but not least, I am concerned about the fact that when we experiment, we have a tendency to expose things to the user before having done a couple of rounds of refinements. For example, the changes to the version stream are already pushed to master now meaning they will go out to the users. For me, the version stream is part of the “public API” and for a user, it gets confusing what the various pieces of information mean. Maybe the work @abayer is up to can help there.

Totally understood. I'd love a way to mark enhancements/features in the CLI and inside CRDs / jx-requirements.yml / jx-apps.yml files and the like so folks can see what is alpha/beta/GA etc.

We need to be able to incrementally build and release features following the best practices of the Accelerate book (trunk-based development, working in small batches, no long-term feature branches etc) - but we also need a way to hide things from users, or at least make it clear which bits are alpha/beta/GA and so forth. Something a little like the gcloud alpha / beta CLI would be nice too.

@hferentschik

@jstrachan, thanks for all the clarifications. One thing I might have misunderstood is what we do with the generated Helmfiles. I thought they would, after initial generation, be part of version control, but that's not the case, right? The Helmfile is generated on the fly and just used temporarily. Only jx-apps.yaml is under version control, right?

so if someone really really wanted to go pure helmfile they could and boot would just work; its just the promote/add app/remove app/validation/tooling would all struggle.

I guess that would still worry me. I don't really think that it's either jx-apps.yaml or Helmfile - not if one wants a fully working solution all the way down. I think we need to commit to one approach.

@jstrachan (Member Author)

@hferentschik totally agree, I'd prefer the one approach. Right now the generated helmfiles are never checked in, so they are an invisible implementation detail to end users (unless someone is working underneath the covers). Also, if we move towards a future where the boot pipeline only runs server side, you'd never see the generated helmfiles on your local laptop (you'd have to kubectl exec into a running pod to see them). So hopefully we can treat them as an implementation detail and not something end users ever need to see or worry about.

@hferentschik

So one of the issues with all those values binding files in the current boot is they get copied into the users development environment git repository. They should be versioned somewhere in, say, git and shared across most installations.

Is it really such a bad thing that they are copied? I don't have a problem with that. It's transparent at least, whereas your approach has several levels of indirection which make it hard(er) to reason about things or even to understand how they fit together. It is a trade-off really.

Inlining them inside the helmfile would lead to copy/paste of these complex files and lead to all the git rebase/merge/cherry pick issues.

I think we could get this sorted. I was also wondering how having the value bindings in the version stream affects the upgrade process. Even though not ideal, at least conceptually we are able to detect conflicts and get the user to resolve them. If the value bindings are in the version stream and all the upgrade does is advance the version stream ref, we have no way of telling whether this would conflict with any potential local changes/overrides the user made. WDYT?

We could look at moving the values file to another git repository;

I don't think this is a great solution either. I guess version stream or inlining are the real options here.

But we can review and improve this further if needed.

I just wish we would contain these initial review-and-improve cycles to a smaller set of people (e.g. the core contributors) as opposed to releasing each one out into the wild.

@hferentschik

folks can totally opt out of the version stream completely and version everything by hand in their jx-apps.yml- they can even opt out of that and go pure helmfile too

Which is partly worrying me as well. One problem I think we are facing is that we always say that you can also do X instead of Y etc. Suddenly you have to support and maintain a multitude of different approaches.

@hferentschik

Also if we move towards a future where the boot pipeline only runs server side; you'd never see the generated helmfiles on your local laptop (you'd have to kubectl exec into a running pod to see them)

If we were to treat Helmfile as an implementation detail, would we also not mention any Helmfile specifics in the docs? So jx-apps.yaml is our "API" and Helmfile an implementation detail? That would basically be one of the two possible approaches I mentioned earlier.

@hferentschik

One thing I like about the generated Helmfile is that it makes explicit how things fit together in terms of Chart + value binding + secrets, etc. jx-apps.yaml hides that again and leaves me scratching my head about where and how things get expanded and combined.

@daveconde (Contributor)

My concern at the moment is that we do not have a way to upgrade users from a boot-based cluster to a boot & helmfile based cluster. Ideally a user would not notice the change and it would just happen as part of jx upgrade boot. I think until we have the upgrade ability, we need to make it really obvious to users that the functionality is experimental, subject to further change, and should only be used taking the above into account.

@jstrachan (Member Author)

So one of the issues with all those values binding files in the current boot is they get copied into the users development environment git repository. They should be versioned somewhere in, say, git and shared across most installations.

Is it really such a bad thing that they are copied. I don't have a problem with that. It's transparent at least whereas your approach has several levels of indirection which makes it hard(er) to reason about things or even to understand how things fit together. It is a trade-off really.

Yes, it is. The reason I didn't want to copy them from the boot config into every Dev / Staging / Production git repository anyone ever creates is that it introduces a maintenance nightmare. Every time an upstream chart needs its binding changed (e.g. a changed/new values.yaml bit of config), or we introduce a new bit of config, or add a new field to jx-requirements.yml that, say, tekton or lighthouse or whatever needs, we would have to do the git rebase/cherry-pick dance to try to upgrade everyone's git repositories to merge the changes into the values binding files.

The current design in the enhancement proposal leaves all the system binding files in the version stream, and the only source code that goes into each Dev / Staging / Production git repository is 100% local configuration - so no copy/paste is ever needed, nor is any git rebase/cherry-pick/merge hell forced on our end users.

Inlining them inside the helmfile would lead to copy/paste of these complex files and lead to all the git rebase/merge/cherry pick issues.

I think we could get this sorted. I was also wondering how having the value bindings in the version stream affects the upgrade process.

it lets us update the chart versions / images and binding files consistently in 1 PR - that's good, right?

Even though not ideal, at least conceptually we are able to detect conflicts and get the user to resolve them. If the value bindings are in the version stream and all the upgrade does is advance the version stream ref, we have no way of telling whether this would conflict with any potential local changes/overrides the user made. WDYT?

I think you've missed my point. We don't want users git-merging complex YAML files - we want upgrading/downgrading Jenkins X to be really easy and reliable for mere mortals, or for simple tooling that works reliably.

But we can review and improve this further if needed.

I just wish we would contain this initial review and improve cycles to a smaller set of people (eg the core contributors) opposed to releasing each out into the wild.

It's really hard to get feedback in an open source project without doing things in public. That's kinda the point of OSS :)

Though it would be nice to be able to mark commands / CLI arguments / jx-requirements.yml flags as experimental / alpha / beta / GA so it's easier to experiment upstream without worrying about affecting beta/GA stuff, e.g. like gcloud alpha ...

@jstrachan (Member Author)

folks can totally opt out of the version stream completely and version everything by hand in their jx-apps.yml- they can even opt out of that and go pure helmfile too

Which is partly worrying me as well. One problem I think we are facing is that we always say that you can also X instead of Y etc. Suddenly you have to support and maintain a multiple of different approaches.

Given that commands like jx upgrade boot / jx add app / jx get app / jx delete app / jx promote / jx boot all need to be modified to use a single, simple YAML file, I think we should enforce the single simple jx-apps.yml file as part of the contract of the source of Jenkins X GitOps.

If folks want to go off-plan and do their own additional pure-helmfile stuff in the pipeline, then fine - they are on their own; they don't get the promote / boot upgrades / version streams / apps behaviour on those custom helmfile things. But maybe that's OK for some users.

@jstrachan (Member Author)

Also if we move towards a future where the boot pipeline only runs server side; you'd never see the generated helmfiles on your local laptop (you'd have to kubectl exec into a running pod to see them)

If we were to treat Helmfile as implementation details would we also not mention any Helmfile specifics in the docs? So jx-apps.yaml is our "API" and Helmfile an implementation detail? That would basically be one of the two possible approaches I have been mentioning earlier.

Yes - jx-apps.yml is our API for end users + tooling (promote / apps framework / tools for batch promotion etc).

End users adding/removing apps or promoting apps and configuring local or remote Staging/Production environments don't need to know anything about helmfile, so there's no real need to document it there. We've always used the same approach in Jenkins X: an end user creating quickstarts that deploy to kubernetes doesn't need to know about kubectl / helm either.

A good example of this approach in action is helm charts themselves - they describe what you need to put in your values.yaml as a simple end-user contract; a chart doesn't need to describe all the complexities of what goes in the templates folder, the k8s resource YAML format or the go templating complexities.

The Jenkins X contributors guide can peel back the curtain and go into graphic detail of how to debug apps via helmfile + the complexities of helmfile's values file injection with paths + wildcards + templating, inheritance and the like - but that's for contributors + app developers, not end users building cloud native apps to deploy to kubernetes.

@jstrachan (Member Author) commented Feb 13, 2020

One thing I like about the generated Helmfile is that it makes explicit how things fit together in terms of Chart + value binding + secrets, etc. jx-apps.yaml hides that again and leaves me again scratching my head on where and how things are getting expanded and combined.

there's a massive difference between:

  1. providing a simple declarative UX to end users trying to get their apps of different versions deployed to different environments, and for folks to build tools on top (e.g. bulk promote for helmfile #6674)
  2. showing a contributor how all this stuff actually works to help them understand the implementation

We don't have to break (1) to fix (2). e.g. look at tools like helm / terraform with their template / explain commands (or databases with explain statements).

i.e. if a developer wants to see how all this stuff works, we could have a boot command that explains it to them.

As we've already mentioned in #6672 - it would be awesome for each PR on any Environment git repository using the new helmfile / helm 3 stuff to generate a PR comment with the diff (created via helmfile diff) of what would happen if the PR merged, in terms of changes to k8s resources (obviously masking secrets) - which is actually more helpful than knowing what the generated helmfile looks like TBH. But if it helps, that generated report could also include the helmfiles. Or folks could just run jx boot explain or something?

@polothy commented Feb 19, 2020

@jstrachan are you able to provide an update on the status of this PR and #6664? Is this a feature we can start playing around with, or not quite yet?

@jstrachan (Member Author) commented Mar 2, 2020

I'm gonna close this PR for now - we're using this approach for folks to test out helm3 / helmfile: https://jenkins-x.io/docs/labs/boot/

@jstrachan jstrachan closed this Mar 2, 2020