We need the ability to set custom CNI networking when creating a cluster #1822
Users have the ability to use different container runtimes like containerd, but this module does not interact with the Kubernetes API beyond what the EKS service exposes. I don't think this is a question for this module, though.
@shaibs3 I agree with @bryantbiggs that these kinds of "addons" are out of scope for this module and are down to the personal choice of the user. However, as an idea: provision the AWS CNI with a Helm chart, either individually or via the Terraform Helm provider. Then you can glue the full cluster creation together with a small script that creates the EKS cluster with Terraform, provisions the CNI with Helm, scales the ASG down to 0 and back up to 1.
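As a rough sketch of that idea (the chart repository and name, and the `module.eks` outputs used here, are assumptions; check the upstream Tigera/Calico docs and the module's outputs), the Helm provider route could look like:

```hcl
# Sketch only: provision a third-party CNI (Calico via the tigera-operator
# chart) with the Terraform Helm provider once the control plane exists.
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
    }
  }
}

resource "helm_release" "calico" {
  name             = "calico"
  repository       = "https://docs.tigera.io/calico/charts" # assumed repo URL
  chart            = "tigera-operator"
  namespace        = "tigera-operator"
  create_namespace = true
}
```

The ASG scale-down/up to recycle the nodes would still have to happen outside Terraform, e.g. in a wrapper script.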
Yes, we are aware of the way to enable this with scripting. We are heavily investing in GitOps, and we would prefer to be able to declare such configuration rather than write imperative code to do it.
I'm open to hearing what this might look like, but I'll be honest: I don't see how it fits here, or within Terraform at all.
I'm interested in seeing improvements here too, tbh; we use the Calico CNI exclusively in our EKS clusters, and without running imperative code as mentioned above, our options are:
For context, we're using Terragrunt to support calling these modules directly with `inputs` and passing dependent outputs across modules. This allows us to use the modules natively without needing to write our own monolithic wrapper modules. I'm not really sure of the best solution here yet, but it isn't a clean-to-implement use case, that's for sure. It's a shame the AWS CNI is so limited, as this would save me a tonne of time; who knows, maybe they'll add third parties to their addons later. As a thought: maybe moving the
This is already supported today - users can provision a full cluster in a single module instantiation, or they can separate the control plane from the node groups: https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules
Sorry, I specifically meant being able to define a map of multiple nodegroups, as using the module directly only accepts a single node group definition at a time. |
I still don't follow - the module accepts a map of n-number of node groups, where node groups can be EKS managed node groups, self-managed node groups, or Fargate profiles. You can even have multiple maps of all 3 types together - please see the examples provided.
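For reference, a minimal sketch of the map-of-node-groups usage (names and sizes here are illustrative, not from the examples):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "example"
  cluster_version = "1.21"

  # A map of n node groups; self_managed_node_groups and
  # fargate_profiles accept similar maps alongside this one.
  eks_managed_node_groups = {
    blue = {
      min_size     = 1
      max_size     = 3
      desired_size = 1
    }
    green = {
      instance_types = ["t3.large"]
      min_size       = 1
      max_size       = 3
      desired_size   = 1
    }
  }
}
```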
The parent module does; the node-group modules do not?
You can do a
Ok, I'll stop hijacking this thread now, lol, but yes, that would work; it would also require me to pass all the vars the module needs through another module that defines the `for_each`, meaning I basically have to copy the vars and module code from the main module, and then update that whenever it changes too.
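For anyone hitting the same wall, a hedged sketch of the `for_each` approach against the sub-module directly (the variable shape and the sub-module inputs shown are assumptions; check the module's own `variables.tf`):

```hcl
variable "node_groups" {
  # Shape is illustrative; the real sub-module takes many more inputs.
  type = map(object({
    instance_types = list(string)
    desired_size   = number
  }))
}

module "node_group" {
  source   = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  for_each = var.node_groups

  name           = each.key
  cluster_name   = var.cluster_name
  instance_types = each.value.instance_types
  desired_size   = each.value.desired_size
  # ...every other required input has to be threaded through here too,
  # which is the duplication being complained about above.
}
```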
Closing this for now - this module does not manage cluster internals, which are required to enable custom CNI networking.
This module does manage one cluster internal: the aws-auth ConfigMap. And it uses the kubernetes provider to do it.
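For context, that management is essentially a `kubernetes_config_map` resource along these lines (a simplified sketch, not the module's actual code; the role mapping content is illustrative):

```hcl
resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Node role mappings rendered to YAML; values here are placeholders.
    mapRoles = yamlencode([
      {
        rolearn  = "arn:aws:iam::111122223333:role/example-node-role"
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      },
    ])
  }
}
```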
I am using the Terraform module to create the EKS cluster.
After the cluster is up and running, I follow the instructions in https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html to set up CNI custom networking.
However, this has a big downside: after completing all the steps, I need to drain the instances belonging to the ASG so that the networking configuration takes effect on the new nodes.
What I need is the ability to configure all those steps during EKS cluster creation, so that the nodes come up with the custom networking already in place.
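A declarative sketch of the ENIConfig part of those steps, assuming the kubernetes provider's `kubernetes_manifest` resource and a hypothetical `var.custom_networking_subnets` map (one entry per availability zone; the variable name and shape are illustrative):

```hcl
# One ENIConfig per availability zone, per the AWS custom networking guide.
resource "kubernetes_manifest" "eni_config" {
  for_each = var.custom_networking_subnets # e.g. { "us-east-1a" = { ... } }

  manifest = {
    apiVersion = "crd.k8s.amazonaws.com/v1alpha1"
    kind       = "ENIConfig"
    metadata = {
      # Named after the AZ so nodes can be matched to it by zone label.
      name = each.key
    }
    spec = {
      subnet         = each.value.subnet_id
      securityGroups = each.value.security_group_ids
    }
  }
}
```

The aws-node DaemonSet would still need `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true` set on it, and the existing nodes recycled, before the ENIConfigs take effect - which is exactly the drain step described above.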