
packages: Update cni and cni-plugin packages #2842

Merged
merged 1 commit into bottlerocket-os:develop from update-cni
Mar 9, 2023

Conversation


@stmcginnis stmcginnis commented Feb 28, 2023

Issue number:

Closes: #1745

Description of changes:

This updates the cni and cni-plugin packages in Bottlerocket.

Most of the time, the CNI plugins in use are actually pulled down via EKS. Some CNIs, though, and likely the metal and vmware variants, need to use the locally installed CNI binaries. This brings those local binaries up to date with the ones that end up getting pulled from EKS.
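
For reference, a quick way to see what shipped on a node is to inspect the plugin binaries directly. This is a hedged sketch, not from this PR: /opt/cni/bin is the conventional install path and may differ on a Bottlerocket host, and the version string shown is illustrative.

```sh
# List the locally installed CNI plugin binaries (conventional path; may
# differ on Bottlerocket).
ls /opt/cni/bin

# Plugins built from containernetworking/plugins typically print an "about"
# string with their build version when run with no CNI_COMMAND set
# (output illustrative):
/opt/cni/bin/bridge
# CNI bridge plugin v1.2.0
```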

The flannel CNI plugin was deprecated and has since been removed. The dummy plugin has been introduced and can be used when chained with other CNIs for local routing.
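
To illustrate the chaining mentioned above, here is a minimal sketch of a conflist that includes the dummy plugin. This is not from the PR: the file name, network name, subnet, and the portmap companion plugin are all illustrative, and the CNI config directory on Bottlerocket may differ from the conventional /etc/cni/net.d.

```sh
# Hypothetical chained CNI configuration using the new dummy plugin:
# dummy creates the interface and host-local assigns it an address,
# then portmap handles port mappings in the same chain.
cat <<'EOF' > /etc/cni/net.d/10-dummy-chain.conflist
{
  "cniVersion": "1.0.0",
  "name": "dummy-chain",
  "plugins": [
    {
      "type": "dummy",
      "ipam": {
        "type": "host-local",
        "subnet": "10.1.2.0/24"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF
```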

Testing done:

Testing in progress:

  • Build and test aws-k8s-1.24 for basic functionality
  • Deploy aws-k8s-* variant and verify a more complex CNI like Cilium is functional
$ cilium connectivity test
........

✅ All 32 tests (228 actions) successful, 0 tests skipped, 1 scenarios skipped.
$ kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
cilium-jr94q                      1/1     Running   0          30m
cilium-operator-fc8765957-j4xfb   1/1     Running   0          35m
cilium-operator-fc8765957-xnj2x   1/1     Running   0          35m
cilium-qdjqp                      1/1     Running   0          30m
coredns-5c5677bc78-cvwmh          1/1     Running   0          43m
coredns-5c5677bc78-l9zfh          1/1     Running   0          43m
kube-proxy-cbhn2                  1/1     Running   0          30m
kube-proxy-n8qgr                  1/1     Running   0          30m
$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:          OK
 \__/¯¯\__/    Operator:        OK
 /¯¯\__/¯¯\    Hubble Relay:    disabled
 \__/¯¯\__/    ClusterMesh:     disabled
    \__/

Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium             Running: 2
                  cilium-operator    Running: 2
Cluster Pods:     6/6 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.13.0@sha256:6544a3441b086a2e09005d3e21d1a4afb216fae19c5a60b35793c8a9438f8f68: 2
                  cilium-operator    quay.io/cilium/operator-aws:v1.13.0@sha256:3cc9ff5bcc57f536427e7059abc916831b368654dfddcbad8a412731984a95e4: 2
  • vmware-k8s-1.24 with cilium
$ kubectl --kubeconfig br-eksa-124-eks-a-cluster.kubeconfig get all -A
NAMESPACE                           NAME                                                                 READY   STATUS    RESTARTS        AGE
capi-kubeadm-bootstrap-system       pod/capi-kubeadm-bootstrap-controller-manager-6b5b855b54-d72ll       1/1     Running   0               3m49s
capi-kubeadm-control-plane-system   pod/capi-kubeadm-control-plane-controller-manager-79cc69f86c-hcmzd   1/1     Running   0               3m32s
capi-system                         pod/capi-controller-manager-7f84fc79cc-ppzt7                         1/1     Running   0               3m53s
capv-system                         pod/capv-controller-manager-df774d599-qxcvn                          1/1     Running   0               3m27s
cert-manager                        pod/cert-manager-76c4fc967-7wmxv                                     1/1     Running   0               4m9s
cert-manager                        pod/cert-manager-cainjector-5bbdf98444-rgmj4                         1/1     Running   0               4m9s
cert-manager                        pod/cert-manager-webhook-5b966777df-24tqf                            1/1     Running   0               4m9s
eksa-packages                       pod/eks-anywhere-packages-99446dc64-wndjn                            1/1     Running   0               85s
eksa-system                         pod/eksa-controller-manager-6b969b586b-s8qgz                         1/1     Running   0               116s
etcdadm-bootstrap-provider-system   pod/etcdadm-bootstrap-provider-controller-manager-bf77c4666-n9czg    1/1     Running   0               3m39s
etcdadm-controller-system           pod/etcdadm-controller-controller-manager-6dbb4b78-4xfmt             1/1     Running   0               3m36s
kube-system                         pod/cilium-ck4b4                                                     1/1     Running   0               4m53s
kube-system                         pod/cilium-nxcl4                                                     1/1     Running   0               5m45s
kube-system                         pod/cilium-operator-6c8bfc95dd-5x448                                 1/1     Running   0               5m45s
kube-system                         pod/cilium-operator-6c8bfc95dd-blwkr                                 1/1     Running   0               5m45s
kube-system                         pod/cilium-rnvzd                                                     1/1     Running   0               5m2s
kube-system                         pod/cilium-w7ndl                                                     1/1     Running   0               4m50s
kube-system                         pod/coredns-745b844d56-68lzl                                         1/1     Running   0               5m54s
kube-system                         pod/coredns-745b844d56-rc4xj                                         1/1     Running   0               5m54s
kube-system                         pod/kube-apiserver-198.19.131.134                                    1/1     Running   0               6m10s
kube-system                         pod/kube-apiserver-198.19.16.112                                     1/1     Running   0               4m43s
kube-system                         pod/kube-controller-manager-198.19.131.134                           1/1     Running   0               6m10s
kube-system                         pod/kube-controller-manager-198.19.16.112                            1/1     Running   0               4m43s
kube-system                         pod/kube-proxy-24kx2                                                 1/1     Running   0               4m50s
kube-system                         pod/kube-proxy-hwdcp                                                 1/1     Running   0               4m53s
kube-system                         pod/kube-proxy-ppwzx                                                 1/1     Running   0               5m54s
kube-system                         pod/kube-proxy-q64mk                                                 1/1     Running   0               5m2s
kube-system                         pod/kube-scheduler-198.19.131.134                                    1/1     Running   0               6m10s
kube-system                         pod/kube-scheduler-198.19.16.112                                     1/1     Running   0               4m42s
kube-system                         pod/kube-vip-198.19.131.134                                          1/1     Running   0               6m10s
kube-system                         pod/kube-vip-198.19.16.112                                           1/1     Running   0               4m42s
kube-system                         pod/vsphere-cloud-controller-manager-2cgxx                           1/1     Running   1 (4m14s ago)   4m50s
kube-system                         pod/vsphere-cloud-controller-manager-dq8xc                           1/1     Running   1 (4m17s ago)   4m53s
kube-system                         pod/vsphere-cloud-controller-manager-pwbjc                           1/1     Running   0               5m49s
kube-system                         pod/vsphere-cloud-controller-manager-sqjf9                           1/1     Running   1 (4m25s ago)   5m2s
kube-system                         pod/vsphere-csi-controller-7446c59556-87hj4                          5/5     Running   0               5m49s
kube-system                         pod/vsphere-csi-node-7hdfv                                           3/3     Running   0               5m2s
kube-system                         pod/vsphere-csi-node-7z8dv                                           3/3     Running   0               4m53s
kube-system                         pod/vsphere-csi-node-9vv8q                                           3/3     Running   0               4m50s
kube-system                         pod/vsphere-csi-node-c2bhx                                           3/3     Running   0               5m50s

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@stmcginnis stmcginnis marked this pull request as draft February 28, 2023 20:39
@etungsten etungsten linked an issue Mar 1, 2023 that may be closed by this pull request
@stmcginnis stmcginnis marked this pull request as ready for review March 7, 2023 20:12
This updates the `cni` and `cni-plugin` packages in Bottlerocket.

Most of the time, the CNI plugins that are used are actually pulled down
via EKS. There are some CNIs, and perhaps metal and vmware variants,
where they need to use the locally installed CNI binaries. This gets
those local binaries up to date with the binaries that end up getting
pulled from EKS.

Signed-off-by: Sean McGinnis <[email protected]>

@zmrow zmrow left a comment

☘️

@stmcginnis stmcginnis merged commit 51dace7 into bottlerocket-os:develop Mar 9, 2023
@stmcginnis stmcginnis deleted the update-cni branch March 9, 2023 17:21

Successfully merging this pull request may close these issues: Update CNI to 1.0+ (#1745).