
feat: New Karpenter sub-module for easily enabling Karpenter on EKS #2303

Merged

Conversation

bryantbiggs (Member)

Description

  • Add a new sub-module for creating the AWS resources required by Karpenter (see the sketch below)
  • Update examples and references to use the latest supported version of EKS (1.24)
  • Add a new cluster_name output
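
A minimal sketch of how the new sub-module might be consumed alongside the cluster (the registry source path follows this module's conventions; the input names are assumptions and should be verified against the sub-module's README):

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 18.31"

  cluster_name = module.eks.cluster_name

  # Assumed inputs for wiring up IRSA - check the sub-module documentation
  irsa_oidc_provider_arn          = module.eks.oidc_provider_arn
  irsa_namespace_service_accounts = ["karpenter:karpenter"]
}

With this in place, only the Karpenter controller and a provisioner need to be deployed on top (via Helm, etc.).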

Motivation and Context

  • Karpenter is quickly becoming the de facto data plane manager. While it's not one of the compute categories today (EKS managed node groups, self-managed node groups, Fargate profiles), it is used in that manner: all facets of lifecycle management, scaling, etc., of the data plane are handled by Karpenter. This sub-module makes that experience a bit more polished for users and easier to set up. They now have the cluster and the Karpenter resources, and only need to provide the provisioner (via Helm, etc.)
  • The cluster_name output addition is to start differentiating between cluster ID and cluster name. Today the cluster ID output returns the cluster name; the provider API for this may change in the future, so we'll start following that behavior sooner (see the sketch after this list)
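
A minimal sketch of the distinction described above (today both outputs return the same value):

output "cluster_name" {
  value = module.eks.cluster_name
}

# Currently identical to cluster_name; the provider's notion of a cluster
# "ID" may change in the future, hence the dedicated name output.
output "cluster_id" {
  value = module.eks.cluster_id
}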

Breaking Changes

  • No; these are only additions

How Has This Been Tested?

  • I have updated at least one of the examples/* to demonstrate and validate my change(s)
  • I have tested and validated these changes using one or more of the provided examples/* projects
  • I have executed pre-commit run -a on my pull request

@antonbabenko (Member) left a comment:

Looks pretty good (I didn't run it).

modules/karpenter/README.md (review thread; outdated, resolved)
@bryantbiggs (Member, Author)

hmm, is GitHub having a moment?! 😅

@bryantbiggs (Member, Author)

@antonbabenko is the WIP bot back? or did it not get removed from this repo (saw it show up in the initial checks then disappear)

@antonbabenko (Member)

I think we have had the WIP bot for ages. I just double-checked it in other repositories.

@bryantbiggs bryantbiggs merged commit f24de33 into terraform-aws-modules:master Nov 21, 2022
@bryantbiggs bryantbiggs deleted the feat/karpenter-sub-module branch November 21, 2022 18:50
antonbabenko pushed a commit that referenced this pull request Nov 21, 2022
## 18.31.0 (v18.30.3...v18.31.0) (2022-11-21)

### Features

* New Karpenter sub-module for easily enabling Karpenter on EKS (#2303) (f24de33)
@antonbabenko (Member)

This PR is included in version 18.31.0 🎉

@@ -60,7 +61,7 @@ module "eks" {
   version = "~> 18.0"

   cluster_name    = "my-cluster"
-  cluster_version = "1.22"
+  cluster_version = "1.24"
Contributor

In the future, can we do this sort of change in a different PR? A PR that mentions Karpenter where, all of a sudden, there is a major change to the cluster version is not what anyone would expect.

Member

I think this is just in the README, so it's probably not a big deal.

Member Author

The PR description contains all the necessary details; it just seems some have not taken the time to read it 😉

Contributor

I'm sorry. I seem not to have notifications for PRs, only releases, so I missed this PR when it was initially opened. And when I saw the release email, I clicked on the commit ID rather than opening the PR, so I never got to read the description.

Member Author

All good! I do try to put the relevant bits in the PR body with associated links as best I can. It's not always perfect though.

@@ -10,13 +10,13 @@ provider "kubernetes" {
     api_version = "client.authentication.k8s.io/v1beta1"
     command     = "aws"
     # This requires the awscli to be installed locally where Terraform is executed
-    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
+    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
Contributor

What's the reasoning for this change?

Member

It seems more correct to pass module.eks.cluster_name to --cluster-name instead of the cluster_id attribute, no?

Member Author

See the PR changes for background and reasoning; more context can be found in hashicorp/terraform-provider-aws#27560

Contributor

Thanks for explaining. Will modify internally to match

@FernandoMiguel (Contributor)

I'm rather confused why this Karpenter module is being added to the EKS cluster repo instead of the terraform-aws-eks-blueprints project. It seems like duplicated effort to maintain two locations, and a feature set that not everyone expects here (sure, node groups and Fargate are here too...).

@bryantbiggs (Member, Author)

I'm not sure where the confusion is: they are two separate projects aimed at two entirely different use cases. This module provides (nearly) all of the AWS infrastructure required for an EKS cluster. The EKS Blueprints project states that "EKS Blueprints helps you compose complete EKS clusters that are fully bootstrapped with the operational software that is needed to deploy and operate workloads." EKS Blueprints utilizes this module and provides additional software on top, moving past just a set of infrastructure resources, plus a number of examples of how various clusters can be configured.

As I stated in the PR description, Karpenter is quickly becoming the entity that manages the entire lifecycle of the data plane, where today we have EKS managed node groups, self-managed node groups, and Fargate profiles. It is only natural to add this functionality here to aid in running Karpenter, now that more AWS infrastructure resources are required to support it.

@bryantbiggs (Member, Author)

To state it a bit differently, in hindsight: with the changes made in Karpenter v0.19.0, Karpenter would no longer have a place in https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-role-for-service-accounts-eks and would only have its AWS resources defined here.

The actual provisioning of the Karpenter controller and CRDs is outside the scope of what I intend for this module. Maybe someday soon I can remove the Kubernetes provider and the aws-auth configmap, and we can return to a more "pure" module that only interacts with the AWS API. The actual deployment of Karpenter is left up to users, since there are numerous ways folks manage this deployment and lifecycle today: manually (it happens), via Helm, via ArgoCD, via FluxCD, via other SDKs, etc.
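
For example, deploying the controller with Terraform's Helm provider might look something like the sketch below (the chart location and value keys are illustrative for Karpenter ~v0.19 and should be verified against the Karpenter chart documentation; module.karpenter.irsa_arn is an assumed output name of the new sub-module):

resource "helm_release" "karpenter" {
  namespace        = "karpenter"
  create_namespace = true

  name       = "karpenter"
  repository = "oci://public.ecr.aws/karpenter"
  chart      = "karpenter"
  version    = "v0.19.0"

  # Illustrative chart values - confirm the exact keys for your chart version
  set {
    name  = "settings.aws.clusterName"
    value = module.eks.cluster_name
  }

  set {
    name  = "settings.aws.clusterEndpoint"
    value = module.eks.cluster_endpoint
  }

  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.karpenter.irsa_arn
  }
}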

@williamsun-hha

Apologies first if I'm posting this in the wrong place. I am able to follow the example

https://github.com/ABOUTYOUGMBH/terraform-aws-eks/tree/v18.29.0/examples/karpenter

to get Karpenter installed with EKS, but Terraform stops at

# Workaround - hashicorp/terraform-provider-kubernetes#1380 (comment)
resource "kubectl_manifest" "karpenter_provisioner" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/

And I get the following error,

kubectl_manifest.karpenter_provisioner: Creating...
kubectl_manifest.karpenter_provisioner: Still creating... [10s elapsed]
kubectl_manifest.karpenter_provisioner: Still creating... [20s elapsed]
kubectl_manifest.karpenter_provisioner: Still creating... [30s elapsed]
kubectl_manifest.karpenter_provisioner: Still creating... [40s elapsed]

│ Error: default failed to create kubernetes rest client for update of resource: Unauthorized

│ with kubectl_manifest.karpenter_provisioner,
│ on main.tf line 272, in resource "kubectl_manifest" "karpenter_provisioner":
│ 272: resource "kubectl_manifest" "karpenter_provisioner" {

I am not sure what the correct workaround is. Can you please point me in the right direction so I can test Karpenter?

Thank you very much in advance for your help!

@github-actions (bot)

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 23, 2022