🌱 feat: implements nodeadm bootstrapping type #5700
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
testing with this manifest:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: AWSManagedControlPlane
    name: default-control-plane
  infrastructureRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta2
    kind: AWSManagedControlPlane
    name: default-control-plane
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: AWSManagedControlPlane
metadata:
  name: default-control-plane
spec:
  addons:
  - name: kube-proxy
    version: v1.32.0-eksbuild.2
  network:
    cni:
      cniIngressRules:
      - description: kube-proxy metrics
        fromPort: 10249
        protocol: tcp
        toPort: 10249
      - description: NVIDIA Data Center GPU Manager metrics
        fromPort: 9400
        protocol: tcp
        toPort: 9400
      - description: Prometheus node exporter metrics
        fromPort: 9100
        protocol: tcp
        toPort: 9100
  region: us-west-2
  sshKeyName: ""
  version: v1.33.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
metadata:
  name: default
spec:
  template:
    spec:
      cloudInit:
        insecureSkipSecretsManager: true
      ami:
        eksLookupType: AmazonLinux2023
      instanceMetadataOptions:
        httpTokens: required
        httpPutResponseHopLimit: 2
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      instanceType: m5a.16xlarge
      rootVolume:
        size: 80
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: NodeadmConfigTemplate
metadata:
  name: default
spec:
  template:
    spec: {}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: default
spec:
  clusterName: default
  replicas: 1
  template:
    spec:
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
          kind: NodeadmConfigTemplate
          name: default
      clusterName: default
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSMachineTemplate
        name: default
      version: v1.33.0
```
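The NodeadmConfigTemplate above is left empty. Based on the NodeadmConfig example shown later in this thread, the template spec can presumably carry the same kubelet settings; here is a hedged sketch (the resource name is hypothetical, and the field names are assumed to mirror NodeadmConfig):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: NodeadmConfigTemplate
metadata:
  name: default-with-kubelet-config   # hypothetical name for illustration
spec:
  template:
    spec:
      # Assumed to mirror NodeadmConfig.spec from the MachinePool example below.
      kubelet:
        config:
          evictionHard:
            memory.available: "2000Mi"
```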
/retest

2 similar comments

/retest

/retest
force-pushed from 8f854bd to 81f3664 (Compare)
/test ?
@faiq: The following commands are available to trigger required jobs: The following commands are available to trigger optional jobs:
In response to this: /test ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test pull-cluster-api-provider-aws-e2e-eks
Does this work with AWSManagedMachinePool?
force-pushed from 81f3664 to 59ecae0 (Compare)
@dsanders1234 try this:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  name: default
spec:
  clusterName: default
  template:
    spec:
      bootstrap:
        #dataSecretName: ""
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
          kind: NodeadmConfig
          name: default
      clusterName: default
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
        kind: AWSManagedMachinePool
        name: default
      version: v1.33.0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSManagedMachinePool
metadata:
  name: default
spec:
  roleName: "nodes.cluster-api-provider-aws.sigs.k8s.io"
  scaling:
    minSize: 1
    maxSize: 3
  amiType: CUSTOM
  awsLaunchTemplate:
    ami:
      eksLookupType: AmazonLinux2023
    instanceMetadataOptions:
      httpTokens: required
      httpPutResponseHopLimit: 2
    instanceType: "m5a.16xlarge"
    rootVolume:
      size: 80
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: NodeadmConfig
metadata:
  name: default
spec:
  kubelet:
    config:
      evictionHard:
        memory.available: "2000Mi"
```
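For context, on AL2023 nodes the EKS nodeadm tool consumes a NodeConfig document (API group node.eks.aws), so the kubelet settings above would presumably be rendered into something like the following on the instance. This is a hedged sketch, not taken from this PR: the cluster name, endpoint, CA data, and CIDR are placeholders, and the exact document produced by this bootstrap provider may differ.

```yaml
# Hypothetical nodeadm NodeConfig rendered from the NodeadmConfig above.
# Cluster values below are placeholders only.
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: default
    apiServerEndpoint: https://EXAMPLE.gr7.us-west-2.eks.amazonaws.com
    certificateAuthority: <base64-encoded CA bundle>
    cidr: 10.96.0.0/12
  kubelet:
    config:
      evictionHard:
        memory.available: "2000Mi"
```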
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from 10503ae to 476eb43 (Compare)
/test pull-cluster-api-provider-aws-e2e-eks

/retest

3 similar comments

/retest

/retest

/retest
force-pushed from 476eb43 to 234d905 (Compare)
/retest

3 similar comments

/retest

/retest

/retest
force-pushed from 9282218 to 6b78f52 (Compare)
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from 6b78f52 to 81960e5 (Compare)
/test pull-cluster-api-provider-aws-e2e-eks

@faiq Thanks for this! 🙌 I tested EKS with the MP + Launch Template + AL2023 + NodeadmConfig and it worked as expected. Also, is there a tentative merge timeline or any remaining blockers?

That's strange. I feel like this is a bug in the MachinePool controller. Shouldn't it be setting the owner ref?
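For reference, the missing link being discussed here is presumably a standard Kubernetes owner reference from the MachinePool to the bootstrap config object. A hedged sketch of what that would look like on the NodeadmConfig (the uid is a placeholder):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: NodeadmConfig
metadata:
  name: default
  ownerReferences:
  - apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachinePool
    name: default
    uid: 00000000-0000-0000-0000-000000000000  # placeholder uid
    controller: true
    blockOwnerDeletion: true
```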
force-pushed from 81960e5 to a10f273 (Compare)
/test pull-cluster-api-provider-aws-e2e-eks

1 similar comment

/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from ac77773 to a4a638e (Compare)
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from a4a638e to 6f464e0 (Compare)
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from 6f464e0 to e89afb2 (Compare)
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from e89afb2 to a9bf155 (Compare)
/test pull-cluster-api-provider-aws-e2e-eks
force-pushed from a9bf155 to d657f69 (Compare)
/test pull-cluster-api-provider-aws-e2e-eks

The last test failed while deleting the AWS managed control plane, which is entirely unrelated to these changes. EDIT: I'll run the tests once again to get that sweet ✔️

/test pull-cluster-api-provider-aws-e2e-eks
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR implements the nodeadm config type outlined in KEP #5678.

Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #
Special notes for your reviewer:
Checklist:
Release note: