have a place to store DeploymentConfigs that are generated by a build before they become active then a way to tag them? #1708
Comments
If it's not clear from my comments above: the problem is that DeploymentConfigs change over time as containers are added to or removed from pods, labels change, and env vars change. The images referenced in each DeploymentConfig change too. So you can't really use one DeploymentConfig for a project across all environments (dev, test, UAT, pre-prod, prod) and just change the image tags inside it per environment. Being able to pair the DeploymentConfig with the image tag/version might be better.
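To make the drift concrete, here is a rough sketch (resource, image, and env var names are made up, and the apiVersion may just be v1 on older releases) of the DeploymentConfig fields that typically differ between builds and environments:

```yaml
# Illustrative only; names and values are hypothetical.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
  labels:
    environment: dev                       # labels drift per environment
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.2.3-build42   # image tag changes every build
        env:
        - name: DATABASE_URL               # env vars differ per environment
          value: postgres://dev-db/myapp
```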
Something like Helm is looking like a good option for this (i.e. a git repo with Helm-like versioned packaging). More background:
FWIW, now that OpenShift 3.6 uses API groups, we can use Helm charts to store versioned copies of Kubernetes + OpenShift resources.
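As a sketch of that idea (the chart layout, names, and values are hypothetical), the chart template pins the resource shape while the image tag is a value, so each packaged chart version records exactly which image it was built against:

```yaml
# templates/deploymentconfig.yaml in a hypothetical "myapp" chart
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: {{ .Chart.Name }}
spec:
  template:
    spec:
      containers:
      - name: {{ .Chart.Name }}
        # repository/tag come from values.yaml, e.g.
        #   image:
        #     repository: registry.example.com/myapp
        #     tag: 1.2.3-build42
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```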
I am going to close this issue since it doesn't seem like something that needs to be handled inside the core of Origin now that we have API groups.
In a CI system where we run a build, we typically generate a new Docker image with a tag for each build. We can then use tags on images to choose when a generated image gets deployed into a particular environment.
E.g. a build could make a dev/latest image; folks can then try it out and label it 'test' to get it auto-deployed to a test environment. So doing a build doesn't always immediately deploy to all environments (though it often does in development; there's usually an approval/CD process for other environments).
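One way to express that flow with existing primitives (stream and registry names are made up) is an ImageStream whose tags act as the per-environment pointers: CI pushes each build into dev, and promotion to test is just retagging (e.g. `oc tag myapp:dev myapp:test`):

```yaml
# Illustrative ImageStream; names are hypothetical.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myapp
spec:
  tags:
  - name: dev                      # points at the image the CI build pushed
    from:
      kind: DockerImage
      name: registry.example.com/myapp:1.2.3-build42
  - name: test                     # promoted explicitly once the build is approved
    from:
      kind: ImageStreamTag
      name: myapp:dev
```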
In fabric8 land we tend to generate the Kubernetes JSON at build time: we generate a versioned blob of JSON for each build and use a Maven repository to store each versioned Kubernetes JSON. Then at some point we may choose to deploy it in an environment.
I wonder if OpenShift needs to work with DeploymentConfigs in a similar way to Docker images: many versions can be generated by builds and added to a registry of DeploymentConfigs, and we can then use tags to choose which version of the DeploymentConfig to use in a particular environment.
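Purely as a sketch of that proposal (this is not an existing OpenShift resource; the kind, fields, and names below are invented for illustration), the "registry of DeploymentConfigs" could look like a versioned object that environment tags point at, analogous to ImageStream tags:

```yaml
# Hypothetical resource -- NOT an existing OpenShift API; illustration of the proposal only.
kind: DeploymentConfigVersion
metadata:
  name: myapp-1.2.3-build42
  labels:
    app: myapp
    version: 1.2.3-build42
  annotations:
    tags: dev,test        # environments whose tag currently resolves to this version
spec:
  # ...the full DeploymentConfig exactly as generated by that build...
```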
E.g. for a given environment we may use the tag "dev" to find the image + DeploymentConfig, while another environment may use the tag "test" to find its image + DeploymentConfig. It'd be nice to use the same mechanism/tag for both DeploymentConfigs and Docker container images.
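For the image half this already works via ImageChange triggers; a sketch (names hypothetical) of a DeploymentConfig in the test environment that rolls out whenever the myapp:test tag moves:

```yaml
# Illustrative only; a test-environment DeploymentConfig driven by the myapp:test tag.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  triggers:
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:test
  template:
    spec:
      containers:
      - name: myapp
        image: " "        # resolved by the trigger to the image behind myapp:test
```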
Another approach: we could generate the DeploymentConfig in a fabric8 build, store it in Maven, and then have some kind of UI/mechanism for choosing to apply a version of a DeploymentConfig in an environment. But it feels like this should be a thing in OpenShift?
Or are there other approaches to track different versions of DeploymentConfigs across environments?