Automate tests to upgrade individual charts #25

Closed
scottrigby opened this issue Sep 18, 2018 · 3 comments · Fixed by #103

Comments

@scottrigby
Member

Community charts were hit with a set of unexpected upgrade bugs. We can speculate about why this didn't come to our attention earlier, but it also could have been caught (and future issues can be caught) by adding automated upgrade tests.

See:

@scottrigby scottrigby changed the title Automate Upgrade tests Automate tests to upgrade individual charts Sep 18, 2018
@jlegrone
Member

jlegrone commented Dec 14, 2018

@scottrigby I was thinking about picking this up and just want to make sure I've got a handle on the expected logic.

If a chart version bump is not allowed to be a breaking change according to SemVer, then...

  1. Install releases for previous version of chart (one release per ci/*-values.yaml file)
  2. If installation fails for previous version of the chart, ignore that release (we don't want to block fixes to trunk)
  3. For each release under test, run helm upgrade --reuse-values against the latest chart version

For step 2, I assume we should avoid passing in the latest version of the ci/*-values.yaml file? Doing this would potentially cover up breaking changes to the chart's values schema. Instead we should continue testing ci values files on fresh installs so that test cases can be added/amended/removed.
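The three steps above could be sketched as the following bash loop. Everything here is an illustrative assumption rather than the eventual chart-testing implementation: the `HELM` indirection, the release naming, and the chart reference are placeholders, and the `--name` flag matches the Helm 2 CLI that was current at the time.

```shell
#!/usr/bin/env bash
# Sketch of the proposed per-chart upgrade test (hypothetical names).
#
# $1: chart reference, e.g. stable/mychart (placeholder)
# $2: previously published chart version to install first
upgrade_test() {
  local chart="$1" prev_version="$2" helm="${HELM:-helm}"
  local values release
  for values in ci/*-values.yaml; do
    [ -e "$values" ] || continue  # no ci values files present
    release="upgrade-test-$(basename "$values" -values.yaml)"

    # Step 1: install a release of the previous chart version,
    # one release per ci/*-values.yaml file.
    if ! $helm install "$chart" --version "$prev_version" \
         --name "$release" --values "$values"; then
      # Step 2: the previous version failed to install; skip this
      # release so fixes to trunk are not blocked.
      echo "skipping $release: previous version failed to install"
      continue
    fi

    # Step 3: upgrade to the chart under test, reusing the values
    # from the original install (not the latest ci values file).
    $helm upgrade --reuse-values "$release" "$chart"
  done
}
```

Running with `HELM=echo upgrade_test stable/mychart 1.2.3` dry-runs the control flow without a cluster, which is handy for checking the skip logic itself.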

cc @unguiculus

@desaintmartin

There are two possibilities here:

  • Try to upgrade from published version to version from PR
  • Do static analysis to enforce good practices (maybe from helm template and some clever grepping)

Ideally, both would be nice to have.
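A minimal sketch of the static-analysis idea: render the chart to a directory first (e.g. with `helm template mychart --output-dir rendered/`, an assumed invocation), then fail if any rendered Deployment or StatefulSet manifest lacks a `matchLabels` selector. The function name and directory layout are hypothetical.

```shell
# Static-analysis sketch: flag rendered Deployment/StatefulSet
# manifests that contain no matchLabels. Assumes the manifests were
# already written under "$1", e.g. by `helm template --output-dir`.
check_match_labels() {
  local render_dir="$1" bad=0 f
  while IFS= read -r -d '' f; do
    if ! grep -q "matchLabels:" "$f"; then
      echo "missing matchLabels: $f"
      bad=1
    fi
  done < <(find "$render_dir" -type f \
             \( -name '*deployment*' -o -name '*statefulset*' \) -print0)
  return "$bad"
}
```

A nonzero exit status makes this usable directly as a CI gate; the `grep` pattern is deliberately loose and would need tightening (e.g. indentation-aware matching) in a real check.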

Just for the record: there are currently many charts that lack a matchLabels selector. Although this does not always break upgrades, it tends to be error-prone when those Deployments/StatefulSets are upgraded to apps/v1 without paying attention:

find . -type f -name \*deployment\* -print0 | xargs -0 grep -L "    matchLabels:" | wc -l
      92

@scottrigby
Member Author

scottrigby commented Feb 11, 2019

I think static analysis should be handled in separate issue(s). This one is about trying to upgrade to each new (non-MAJOR) release from each previous MINOR release. The goal is to catch breaking changes that should instead require a MAJOR version bump. We can't test every combination of values, so this may miss some cases, but we can at least check the default values.
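Picking "each previous MINOR release" within the same MAJOR is just version arithmetic once you have the list of published versions. A real implementation would query the chart repository index; the function below and its awk filter are purely illustrative:

```shell
# Sketch: given a target version and the list of published versions,
# print the latest PATCH of each earlier MINOR in the same MAJOR.
# These would be the starting points for the upgrade tests.
previous_minors() {
  local target="$1"; shift
  printf '%s\n' "$@" | awk -v tgt="$target" '
    BEGIN { split(tgt, t, ".") }
    {
      split($0, v, ".")
      if (v[1] + 0 != t[1] + 0) next               # different MAJOR
      if (v[2] + 0 >  t[2] + 0) next               # later MINOR
      if (v[2] + 0 == t[2] + 0 && v[3] + 0 >= t[3] + 0) next  # not earlier
      if (!(v[2] in best) || v[3] + 0 > best[v[2]] + 0) best[v[2]] = v[3]
    }
    END { for (m in best) print t[1] "." m "." best[m] }
  ' | sort -t. -k2,2n
}
```

For example, `previous_minors 2.3.0 1.9.0 2.0.1 2.0.2 2.1.0 2.3.0` prints `2.0.2` and `2.1.0`: the latest patch of each earlier minor in the 2.x line.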

PS: I do think the static analysis issues labeled in #19, #99, and #100 are important and should be addressed; I'm just trying to clarify the scope of each issue 😄
