
We need the ability to set custom CNI networking when creating a cluster #1822

Closed
shaibs3 opened this issue Feb 1, 2022 · 14 comments

Comments

@shaibs3 commented Feb 1, 2022

I am using the Terraform module to create the EKS cluster. After the cluster is up and running, I follow the instructions in https://docs.aws.amazon.com/eks/latest/userguide/cni-custom-network.html to set up CNI custom networking.
However, this has a big downside: after completing all the steps, I need to drain the instances belonging to the ASG so that the networking configuration takes effect on new nodes. What I need is the ability to apply all those steps during EKS cluster creation, so that the nodes come up with the custom networking already in place.
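For reference, the steps in that guide boil down to creating one ENIConfig per availability zone and setting two environment variables on the aws-node DaemonSet. A minimal sketch of the ENIConfig part with the hashicorp/kubernetes provider (subnet and security group IDs are placeholders) also shows why this is painful at creation time: kubernetes_manifest needs the cluster, and the ENIConfig CRD, to already exist.

```hcl
# Sketch only: one ENIConfig per AZ via the hashicorp/kubernetes provider.
# Subnet and security group IDs below are placeholders.
resource "kubernetes_manifest" "eniconfig" {
  for_each = {
    "eu-west-1a" = "subnet-0aaaaaaaaaaaaaaaa"
    "eu-west-1b" = "subnet-0bbbbbbbbbbbbbbbb"
  }

  manifest = {
    apiVersion = "crd.k8s.amazonaws.com/v1alpha1"
    kind       = "ENIConfig"
    metadata = {
      # Named after the AZ so ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
      # can resolve the right config for each node.
      name = each.key
    }
    spec = {
      subnet         = each.value
      securityGroups = ["sg-0ccccccccccccccccc"]
    }
  }
}
```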

@bryantbiggs (Member)

Users have the ability to use different container runtimes like containerd, but this module does not interact with the Kubernetes API beyond what is exposed through the EKS service. I don't think this is a question for this module, though.

@philicious (Contributor)

@shaibs3 I agree with @bryantbiggs that these kinds of "addons" are out of scope for this module and down to the personal choice of the user.

However, as an idea: provision the AWS CNI with its Helm chart, either individually or via the helm Terraform provider. You can then glue the full cluster creation together with a small script that creates the EKS cluster with Terraform, provisions the CNI with Helm, scales the ASG down to 0, and scales it back up to 1.
This is unproblematic because a fresh cluster has no workloads to worry about; it only adds about 5 minutes to the cluster setup. A sketch of the Helm part follows below.
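A minimal sketch of that Helm step, assuming the aws-vpc-cni chart from eks-charts (names and values are illustrative):

```hcl
# Sketch only: provisioning the AWS VPC CNI via the Terraform helm provider,
# assuming the aws-vpc-cni chart from https://aws.github.io/eks-charts.
resource "helm_release" "aws_vpc_cni" {
  name       = "aws-vpc-cni"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-vpc-cni"
  namespace  = "kube-system"

  # Turn on custom networking before any nodes join the cluster.
  set {
    name  = "env.AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG"
    value = "true"
    type  = "string" # keep Helm from coercing the value to a boolean
  }
  set {
    name  = "env.ENI_CONFIG_LABEL_DEF"
    value = "topology.kubernetes.io/zone"
  }
}
```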

@ezraroi commented Feb 4, 2022

Yes, we are aware of the way to enable such a thing with scripting. We are investing heavily in GitOps, and we prefer to be able to declare such configuration rather than write imperative code that does this.

@bryantbiggs (Member)

> Yes, we are aware of the way to enable such a thing with scripting. We are investing heavily in GitOps, and we prefer to be able to declare such configuration rather than write imperative code that does this.

I'm open to hearing about what this might look like, but I'll be honest - I don't see how it fits here or within Terraform at all

@dalgibbard

I'm interested in seeing improvements here too, tbh; we use the Calico CNI exclusively in our EKS clusters, and without running imperative code as mentioned above, our options are:

  • Copy this entire module to hack in support and maintain the changes onward locally (which is what we were doing for v17, but the changes introduced in v18 were substantial)
  • Deploy the cluster without any node groups, add Calico (and the EBS KMS key whilst I'm there), deploy node groups (with lots of outputs passed across...), deploy aws-auth...

For context, we're using Terragrunt to support calling these modules directly with 'inputs' and passing dependent outputs across modules. This allows us to use the modules natively without needing to write our own monolithic wrapper modules.
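For illustration, a hypothetical terragrunt.hcl for the node-groups unit in that chain might look like this (the paths, version pin, and output names are made up):

```hcl
# Hypothetical terragrunt.hcl for a node-groups unit that depends on the
# cluster unit; paths, version, and output names are illustrative.
terraform {
  source = "tfr:///terraform-aws-modules/eks/aws//modules/eks-managed-node-group?version=18.2.0"
}

dependency "eks" {
  config_path = "../eks-cluster"
}

inputs = {
  name         = "default"
  cluster_name = dependency.eks.outputs.cluster_id
  subnet_ids   = dependency.eks.outputs.subnet_ids # illustrative output name
}
```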

I'm not really sure of the best solution here yet, but it really isn't a clean use case to implement, that's for sure. It's a shame the AWS CNI is so limited, as this would save me a tonne of time; who knows, maybe they'll add third parties to their addons later.

As a thought, maybe moving nodegroups.tf out into its own module would make it easier to interface with in a cluster-then-nodegroups deployment style? Otherwise we have to interface with the backend module directly, which only accepts a single node group definition.

@bryantbiggs (Member) commented Feb 4, 2022

> As a thought, maybe moving nodegroups.tf out into its own module would make it easier to interface with in a cluster-then-nodegroups deployment style? Otherwise we have to interface with the backend module directly, which only accepts a single node group definition.

This is already supported today - users can provision a full cluster in a single module instantiation, or they can separate the control plane from the node groups: https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules
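A rough sketch of that split, with inputs trimmed to show the shape (VPC/subnet wiring elided):

```hcl
# Control plane from the root module, node group from the sub-module,
# leaving room for a CNI step in between. Inputs are trimmed.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  cluster_name    = "example"
  cluster_version = "1.21"
  # vpc_id, subnet_ids, ...
}

# ... configure the CNI here (e.g. the helm_release sketched above) ...

module "node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "~> 18.0"

  name         = "default"
  cluster_name = module.eks.cluster_id
  # subnet_ids, instance_types, ...
}
```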

@dalgibbard

> As a thought, maybe moving nodegroups.tf out into its own module would make it easier to interface with in a cluster-then-nodegroups deployment style? Otherwise we have to interface with the backend module directly, which only accepts a single node group definition.

> This is already supported today - users can provision a full cluster in a single module instantiation, or they can separate the control plane from the node groups: https://github.com/terraform-aws-modules/terraform-aws-eks/tree/master/modules

Sorry, I specifically meant being able to define a map of multiple node groups, as using the sub-module directly only accepts a single node group definition at a time.
I could wrap it, but then I'm basically copy/pasting the code that is in the main module.

@bryantbiggs (Member)

> Sorry, I specifically meant being able to define a map of multiple node groups, as using the sub-module directly only accepts a single node group definition at a time. I could wrap it, but then I'm basically copy/pasting the code that is in the main module.

I still don't follow - the module accepts a map of n-number of node groups, where node groups can be EKS managed node groups, self-managed node groups, or Fargate profiles. You can even have multiple maps of all 3 types together - please see the examples provided.
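For example, roughly (group definitions trimmed):

```hcl
# The maps accepted by the root module; entries are illustrative.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  cluster_name = "example"

  eks_managed_node_groups = {
    general = {
      instance_types = ["m5.large"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
    spot = {
      capacity_type  = "SPOT"
      instance_types = ["m5.large", "m5a.large"]
    }
  }

  self_managed_node_groups = {
    legacy = {
      instance_type = "t3.large"
    }
  }

  fargate_profiles = {
    default = {
      selectors = [{ namespace = "default" }]
    }
  }
}
```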

@dalgibbard

The parent module does; the node-group sub-modules do not?
Unless I'm missing something here.

@bryantbiggs (Member)

You can do a for_each in a module, so if you are using the eks-managed-node-group sub-module directly, just throw a for_each in the module definition and pass in your map of node group definitions.
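For example, a minimal sketch (Terraform >= 0.13 supports for_each on module blocks; the variable shapes here are illustrative):

```hcl
# for_each over the sub-module directly; variable shapes are illustrative.
variable "cluster_name" {
  type = string
}

variable "node_groups" {
  type = map(object({
    subnet_ids     = list(string)
    instance_types = list(string)
  }))
}

module "eks_managed_node_group" {
  source  = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"
  version = "~> 18.0"

  for_each = var.node_groups

  name           = each.key
  cluster_name   = var.cluster_name
  subnet_ids     = each.value.subnet_ids
  instance_types = each.value.instance_types
}
```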

@dalgibbard

OK, I'll stop hijacking this thread now, lol - but yes, that would work. It would also require me to pass all the vars the module needs through another module that defines the for_each, meaning I basically have to copy the vars and module code from the main module, and then keep that updated whenever it changes too.

@bryantbiggs (Member)

Closing this for now - this module does not manage the cluster internals that are required to enable custom CNI networking.

@tulanian

This module does manage one cluster internal: the aws-auth ConfigMap. And it uses the kubernetes provider to do it.
