
Replace yaml examples with the stable helm chart #1496

Closed
aledbf opened this issue Oct 7, 2017 · 7 comments · Fixed by #1527

Comments

@aledbf
Member

aledbf commented Oct 7, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

FEATURE REQUEST

Right now the examples are not clear: some of them do not use RBAC, and some use the kube-system namespace. Using the helm chart we can provide a better user experience and avoid unnecessary duplication.

@justinsb
Member

justinsb commented Oct 9, 2017

It's great to add a helm chart, but helm charts are not great documentation, so if you could maintain a canonical manifest that would be much appreciated for people not using helm!

@aledbf
Member Author

aledbf commented Oct 9, 2017

@justinsb right, the issue here is that we have too many examples, duplication, no use of RBAC, etc. Do you have another idea for how we can change this situation?

Edit: I do not use helm but it seems it is the "standard" way to install and configure apps

@justinsb
Member

So we maintain two configurations in the kops repo - one for GCE and one for AWS (https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx). I think it would make a lot more sense for ingress (and each project) to have its own canonical manifest - it really doesn't belong in kops!

It would also make life easier for helm developers, who would then be able to consume that canonical manifest, and it could also be consumed by other tools, e.g. the kubernetes addon manager (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons).

If you want to maintain a helm chart as well, that's great (though that probably belongs in the helm repository). And we heard feedback loud and clear from kube-up users that manifests declared in a templating language - be it jinja or golang templates - are not a good way to communicate how something should be installed.

There's also a proposal for a new declarative application management approach based directly on the kubernetes API cc @bgrant0607. So in theory we could have a base manifest and then a set of patch files for various options. I'd like to experiment with that option a bit more - if you look at the gce and aws examples, they are basically identical except for the proxy-protocol annotations on AWS.

As I understand it, the proposal would let us express the AWS-specific changes as a patch (in the format that kubectl patch expects), so that we would be using the kubernetes API and the differences would be short and obvious. If we design our API well, the patch files should also be composable, which is the problem with helm templates - once you end up with more than a few permutations it becomes unreadable. (And it's also the problem with the kops approach - static files break down entirely with just a handful of options.)
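
To make that concrete, a sketch of what such an AWS-specific patch file might look like (the Service name, namespace, and annotation value here are placeholders for illustration, not a settled layout):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: kube-ingress
  annotations:
    # the only AWS-specific difference, layered on top of the shared base manifest
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'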

@aledbf
Member Author

aledbf commented Oct 10, 2017

So in theory we could have a base manifest and then a set of patch files for various options. I'd like to experiment with that option a bit more

That sounds interesting :)
Do you have a link for that proposal or an example?

And we heard feedback loud and clear from kube-up users that manifests declared in a templating language - be it jinja or golang templates - are not a good way to communicate how something should be installed.

I agree. This is what I have in a branch for the nginx-ingress stable chart and this gets unreadable really fast. This is just an example of the service:

apiVersion: v1
kind: Service
metadata:
  annotations:
{{- if .Values.controller.service.annotations }}
{{ toYaml .Values.controller.service.annotations | indent 4 }}
{{- end }}
{{- if (eq .Values.provider "aws") }}
{{- if and .Values.aws.tlsTermination (not (empty .Values.aws.certificate)) }}
    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.aws.certificate }}
    # the backend instances are HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Map port 443
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
{{- else if .Values.aws.useProxyProtocol }}
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
{{- end }}
{{- else if or (eq .Values.provider "gce") (eq .Values.provider "gke") }}
    service.beta.kubernetes.io/external-traffic: OnlyLocal
{{- else if or (eq .Values.provider "azure") }}    
    service.beta.kubernetes.io/external-traffic: OnlyLocal
{{- end }}
.....
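
For context, the values.yaml stanza this template snippet reads would look roughly like the following (the keys mirror the template above; the values are placeholders, not what is actually in the branch):

controller:
  service:
    annotations: {}
provider: aws              # one of: aws, gce, gke, azure
aws:
  tlsTermination: true
  certificate: ""          # ARN of the ACM certificate, left empty here
  useProxyProtocol: false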

@justinsb
Member

The original doc is here: https://docs.google.com/document/d/1cLPGweVEYrVqQvBLJg6sxV-TrE5Rm2MNOBA_cxZP2WU/edit#

There's some discussion of trying to form a working group around it: https://groups.google.com/d/msg/kubernetes-sig-apps/q7XlyRSLz5M/Oq1Y56oaAQAJ

Personally I've been experimenting with the MVP of this, which is to have the "official" manifest and then, instead of changing it directly, creating a patch file:

> cat patches/service/ingress-nginx.yaml 
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "600"
  name: ingress-nginx
  namespace: kube-ingress

which is then applied with:

kubectl patch -n kube-ingress service/ingress-nginx -p "$(cat patches/service/ingress-nginx.yaml)"

It rapidly becomes clear that the existing kubectl needs some tweaks to make this work well, which I think is why sig-cli is driving a lot of this, but I believe the tweaks to be relatively minor.
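
(For reference, kubectl patch defaults to a strategic merge patch for built-in types, so the invocation above is equivalent to spelling the type out explicitly:)

kubectl patch -n kube-ingress service/ingress-nginx --type strategic -p "$(cat patches/service/ingress-nginx.yaml)"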

@kfox1111

I've very rarely been able to launch any pre-existing raw kubernetes resources without having to make at least some changes, and it usually takes more than one: NodeSelectors, annotations, resource requirements. For example, even flannel fails out of the box, as it has a network space that conflicts with our corporate network, needs a node selector to go where it needs to, and needs customization as to which nic it will land on. Plus I need multiple instances of the resources, one per class of hardware.

Ingress is in the same boat for me. I need a NodeSelector to land it on our ingress hosts, a configuration option to deploy it as a Deployment vs a DaemonSet, a configurable for which IngressClass it's listening on, and multi-instance support since I usually have more than one IngressClass (associated with different networks) running at a time.
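
At least the NodeSelector part seems expressible as a patch in the format described above, something like this (the Deployment name, namespace, and label here are made up for illustration):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-ingress
spec:
  template:
    spec:
      # pin the controller pods to our dedicated ingress hosts
      nodeSelector:
        node-role/ingress: "true"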

While I do like the patch idea for building more modular objects, there are still things that I'm not sure will patch well, like the subnet example above. It's much cleaner as a chart.

Maybe part of the solution is adding patch support to helm? I think that may get rid of a lot of the conditionals that are in charts today. If the idea of charts is sound but the implementation is kind of ugly, maybe we can fix it.

@aledbf
Member Author

aledbf commented Oct 13, 2017

Closing. For now we will use the patch approach. We can review this in the future.

@aledbf aledbf closed this as completed Oct 13, 2017