Replace yaml examples with the stable helm chart #1496
It's great to add a helm chart, but helm charts are not great documentation, so if you could maintain a canonical manifest, that would be much appreciated by people not using helm!
@justinsb right, the issue here is that we have too many examples, duplication, no use of RBAC, etc. Do you have another idea for how we can change this situation? Edit: I do not use helm, but it seems it is the "standard" way to install and configure apps.
So we maintain two configurations in the kops repo - one for GCE and one for AWS (https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx). I think it would make a lot more sense for ingress (and each project) to have its own canonical manifest - it really doesn't belong in kops! It would also make life easier for helm developers, who would then be able to consume that canonical manifest, and it could also be consumed by other tools, e.g. the kubernetes addon manager (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons). If you want to maintain a helm chart as well, that's great (though that probably belongs in the helm repository).

We also heard feedback loud and clear from kube-up users that manifests declared in a templating language - be it jinja or golang templates - are not a good way to communicate how something should be installed.

There's also a proposal for a new declarative application management approach based directly on the kubernetes API cc @bgrant0607. So in theory we could have a base manifest and then a set of patch files for various options. I'd like to experiment with that option a bit more - if you look at the gce and aws examples, they are basically identical except for the proxy-protocol annotations on AWS. As I understand it, the proposal would let us express the AWS-specific changes as a patch (in the format that kubectl patch expects), so that we would be using the kubernetes API and the differences would be short and obvious.

If we design our API well, the patch files should also be composable, which is the problem with helm templates - once you end up with more than a few permutations it becomes unreadable. (And it's also the problem with the kops approach - static files break down entirely with just a handful of options.)
That sounds interesting :)
I agree. This is what I have in a branch for the nginx-ingress stable chart, and it gets unreadable really fast. This is just an example of the Service:
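The original snippet was not preserved in this thread; the following is a hedged reconstruction of the kind of templated Service such a chart ends up with (the `.Values` keys here are hypothetical, not the actual branch contents):

```yaml
# Hypothetical Helm-templated Service: every optional feature becomes
# a conditional, which is what makes the file hard to read.
apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
  {{- if .Values.controller.service.annotations }}
  annotations:
{{ toYaml .Values.controller.service.annotations | indent 4 }}
  {{- end }}
spec:
  type: {{ .Values.controller.service.type }}
  {{- if .Values.controller.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.controller.service.loadBalancerIP }}
  {{- end }}
  ports:
    - name: http
      port: 80
      targetPort: http
    {{- if .Values.controller.service.enableHttps }}
    - name: https
      port: 443
      targetPort: https
    {{- end }}
  selector:
    app: {{ template "fullname" . }}
```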
The original doc is here: https://docs.google.com/document/d/1cLPGweVEYrVqQvBLJg6sxV-TrE5Rm2MNOBA_cxZP2WU/edit#

There's some discussion of trying to form a working group around it: https://groups.google.com/d/msg/kubernetes-sig-apps/q7XlyRSLz5M/Oq1Y56oaAQAJ

Personally I've been experimenting with the MVP of this, which is to have the "official" manifest and then, instead of changing it directly, creating a patch file:
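The actual patch file was not preserved in this thread; a sketch of what such a file might look like, using the AWS proxy-protocol difference discussed above (the object name and annotation value are assumptions):

```yaml
# patch-aws.yaml - hypothetical strategic-merge patch layering the
# AWS proxy-protocol annotation on top of the canonical manifest.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
```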
which is then applied with:
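The exact command was also not preserved; roughly along these lines (file and resource names are illustrative):

```shell
# Apply the canonical manifest, then overlay the platform-specific patch.
kubectl apply -f ingress-nginx.yaml
kubectl patch service ingress-nginx --patch "$(cat patch-aws.yaml)"
```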
It rapidly becomes clear that the existing kubectl needs some tweaks to make this work well, which is, I think, why sig-cli is driving a lot of this, but I believe the tweaks to be relatively minor.
I've very rarely been able to launch any pre-existing raw kubernetes resources without having to make at least some changes - and usually more than one: NodeSelectors, annotations, resource requirements. For example, even flannel out of the box fails, as it has a network space that conflicts with our corporate network, needs a node selector to go where it needs to, and needs customization as to which nic it will land on. Plus I need multiple instances of the resources, one per class of hardware.

Ingress is in that same boat for me. I need a NodeSelector to land it on our ingress hosts, a configuration option for choosing Deployments vs DaemonSets, a configurable to say which IngressClass it's listening on, and multi-instance support, since I usually have more than one IngressClass (associated with different networks) running at a time.

While I do like the patch idea for building more modular objects, there still are things that I'm not sure will patch well, like the subnet example above. It's much cleaner as a chart. Maybe part of the solution is adding patch support to helm? I think that might get rid of a lot of the conditionals that are in charts today. If the idea of charts is sound but the implementation is kind of ugly, maybe we can fix it.
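For the simpler customizations mentioned above, a patch does work cleanly. For example, a hypothetical strategic-merge patch pinning the controller to dedicated ingress hosts (the label key and Deployment name are assumptions for illustration):

```yaml
# Hypothetical patch adding a nodeSelector to the controller Deployment,
# so the ingress pods land only on the dedicated ingress hosts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  template:
    spec:
      nodeSelector:
        node-role/ingress: "true"
```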
Closing. For now we will use the patch approach. We can review this in the future. |
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST
Right now the examples are not clear: some of them do not use RBAC and some use the kube-system namespace. Using the helm chart we can provide a better user experience and avoid unnecessary duplication.