
Failed to create csi-node-driver pods #9701

Open
whereyourspace opened this issue Jan 11, 2025 · 3 comments

@whereyourspace

Hello. Apologies for my English; this text was written with web translation tools.
I'm trying to create a Kubernetes cluster on Alpine VMs with kubeadm for educational purposes, but I can't get Calico installed. I want to learn Kubernetes deployment.

Context

Here are all the steps taken to prepare the VM and deploy K8S:

# Prepare control-plane node
doas -s
touch /etc/cloud/cloud-init.disabled
apk update && apk upgrade
apk add vim lsblk btop curl
mkfs.ext4 /dev/sdb && e2label /dev/sdb kubelet
mkdir -p /var/lib/kubelet && echo "LABEL=kubelet    /var/lib/kubelet    ext4    defaults    1 1" >> /etc/fstab && mount -a
echo net.ipv4.ip_forward=1 | tee -a /etc/sysctl.conf && sysctl -p
apk add containerd kubeadm kubectl kubelet
service kubelet start && rc-update add kubelet default
sed -i 's/bin_dir.*$/bin_dir\ =\ "\/opt\/cni\/bin\/"/' /etc/containerd/config.toml && service containerd restart
echo "/var/lib/kubelet /var/lib/kubelet    none    defaults,bind 1 1" >> /etc/fstab && mount -a
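A side note on the containerd edit above: the sed rewrites whatever `bin_dir` line is present in `/etc/containerd/config.toml`. A minimal sketch of its effect, assuming a hypothetical stock value of `/usr/libexec/cni/` (the input line is an assumption, not taken from this report):

```shell
# Demonstrate the bin_dir substitution on a sample config line (input value is an assumption)
printf 'bin_dir = "/usr/libexec/cni/"\n' \
  | sed 's/bin_dir.*$/bin_dir\ =\ "\/opt\/cni\/bin\/"/'
# → bin_dir = "/opt/cni/bin/"
```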

# Deploy control-plane components
kubeadm init --control-plane-endpoint 192.168.0.210 --pod-network-cidr 172.16.0.0/16
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/tigera-operator.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/custom-resources.yaml -O
sed -i 's/cidr.*$/cidr: 172.16.0.0\/24/' custom-resources.yaml
kubectl create -f custom-resources.yaml
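For reference, the sed above only takes effect if the manifest's `cidr` line matches; assuming the stock custom-resources.yaml value of `192.168.0.0/16`, it behaves like:

```shell
# Rewrite the IPPool CIDR in the Installation manifest (sample input line)
printf '      cidr: 192.168.0.0/16\n' | sed 's/cidr.*$/cidr: 172.16.0.0\/24/'
# → "      cidr: 172.16.0.0/24" (leading indentation is preserved)
```

Note that 172.16.0.0/24 sits inside the `--pod-network-cidr 172.16.0.0/16` passed to kubeadm above, which is a valid subset.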

Without the `/var/lib/kubelet /var/lib/kubelet none defaults,bind 1 1` entry in /etc/fstab, some components throw errors asking for the mount directory to be shared. I want to use a separate disk for Kubernetes:

/home/alpine # lsblk
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda    8:0    0 24.2G  0 disk /
sdb    8:16   0  100G  0 disk /var/lib/kubelet/pods/5e3edebd-479e-4d1d-bdc3-7674d7d87da5/volume-subpaths/tigera-ca-bundle/calico-kube-controllers/1
                              /var/lib/kubelet/pods/d2c55212-e68f-4b2c-873a-c5098d2e8be2/volume-subpaths/tigera-ca-bundle/calico-node/1
                              /var/lib/kubelet/pods/60cd31ca-4815-498d-81ce-2c32f5fcb5ef/volume-subpaths/tigera-ca-bundle/calico-typha/1
                              /var/lib/kubelet
                              /var/lib/kubelet
sr0   11:0    1    4M  0 rom  
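The bind entry is needed because the csi-node-driver pod mounts /var/lib/kubelet with `mountPropagation: Bidirectional`, which requires the host path to sit on a shared mount. One way to inspect the propagation mode (field positions per the proc(5) mountinfo format; this is a diagnostic sketch, not part of the original report):

```shell
# Print the optional "shared:N" field for the /var/lib/kubelet mount, if any.
# Field 5 of /proc/self/mountinfo is the mount point; field 7 is the first
# optional field (propagation), or "-" when there is none.
awk '$5 == "/var/lib/kubelet" { print $7 }' /proc/self/mountinfo
```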

Calico pod status:

/home/alpine # kubectl get pods -n calico-system                                                                                                                                           

NAME                                       READY   STATUS                 RESTARTS       AGE
calico-kube-controllers-566f4c5577-srq24   1/1     Running                0              23m
calico-node-wzrq8                          1/1     Running                0              23m
calico-typha-8558b88776-bvfl6              1/1     Running                0              23m
csi-node-driver-pc7ms                      0/2     CreateContainerError   8 (113s ago)   23m


/home/alpine # kubectl describe pods -n calico-system csi-node-driver-pc7ms

...
...
...

Events:
  Type     Reason                  Age                    From               Message
  ----     ------                  ----                   ----               -------
  Normal   Scheduled               37m                    default-scheduler  Successfully assigned calico-system/csi-node-driver-pc7ms to alpine-k8s-master-node-1
  Warning  NetworkNotReady         37m (x16 over 37m)     kubelet            network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
  Warning  FailedCreatePodSandBox  37m                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6cb6c2296aa1798abbe7176291b4dce59a1e4ba303370b59cd31084a73d19c0a": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/

Your Environment

  • Calico version: v3.29.1
  • Calico dataplane: default (Linux)
  • Orchestrator version (kubernetes):
/home/alpine # kubectl version
Client Version: v1.31.3
Kustomize Version: v5.4.2
Server Version: v1.31.4
/home/alpine # kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"31", GitVersion:"v1.31.3", GitCommit:"c83cbee114ddb732cdc06d3d1b62c9eb9220726f", GitTreeState:"archive", BuildDate:"2024-11-25T18:12:25Z", GoVersion:"go1.23.3", Compiler:"gc", Platform:"linux/amd64"}
  • Operating System and version:
/home/alpine # cat /etc/alpine-release 
3.21.0

If you need any more information, let me know. Thank you.


whereyourspace commented Jan 11, 2025

Update:

The /var/lib/calico/nodename file is present in the calico-node container, but its content differs from the hostname:

/home/alpine # kubectl exec -n calico-system -it calico-node-wzrq8 -c calico-node -- bash
[root@ALPINE-k8s-master-node-1 /]# ls -lh /var/lib/calico/nodename 
-rw-r--r-- 1 root root 24 Jan 11 03:07 /var/lib/calico/nodename
[root@ALPINE-k8s-master-node-1 /]# cat /var/lib/calico/nodename 
alpine-k8s-master-node-1[root@ALPINE-k8s-master-node-1 /]#
[root@ALPINE-k8s-master-node-1 /]# cat /etc/hostname 
ALPINE-k8s-master-node-1
[root@ALPINE-k8s-master-node-1 /]# exit
/home/alpine # cat /etc/hostname 
ALPINE-k8s-master-node-1 
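On the mismatch above: kubelet lowercases the OS hostname when deriving the node name (node names must be valid lowercase RFC 1123 DNS labels), so /var/lib/calico/nodename holding the lowercased value is expected rather than a bug. A quick sketch of that transformation:

```shell
# Lowercase the hostname the way kubelet does when building the node name
printf '%s\n' "ALPINE-k8s-master-node-1" | tr '[:upper:]' '[:lower:]'
# → alpine-k8s-master-node-1
```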


whereyourspace commented Jan 11, 2025

I think I found the problem: the csi-node-driver DaemonSet's volumes do not seem to contain everything required(?):

/home/alpine # kubectl get ds csi-node-driver -o yaml -n calico-system
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2025-01-11T03:06:08Z"
  generation: 1
  name: csi-node-driver
  namespace: calico-system
  ownerReferences:
  - apiVersion: operator.tigera.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Installation
    name: default
    uid: c9a88568-17f2-4a51-bd40-ac0f07faf24b
  resourceVersion: "674"
  uid: 3cee57ca-e4a7-497a-ada3-82de529025b9
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: csi-node-driver
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: csi-node-driver
        k8s-app: csi-node-driver
        name: csi-node-driver
    spec:
      containers:
      - args:
        - --nodeid=$(KUBE_NODE_NAME)
        - --loglevel=$(LOG_LEVEL)
        env:
        - name: LOG_LEVEL
          value: warn
        - name: KUBE_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: docker.io/calico/csi:v3.29.1
        imagePullPolicy: IfNotPresent
        name: calico-csi
        resources: {}
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            drop:
            - ALL
          privileged: true
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
          seccompProfile:
            type: RuntimeDefault
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run
          name: varrun
        - mountPath: /csi
          name: socket-dir
        - mountPath: /var/lib/kubelet
          mountPropagation: Bidirectional
          name: kubelet-dir
      - args:
        - --v=5
        - --csi-address=$(ADDRESS)
        - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
        env:
        - name: ADDRESS
          value: /csi/csi.sock
        - name: DRIVER_REG_SOCK_PATH
          value: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
        - name: KUBE_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: docker.io/calico/node-driver-registrar:v3.29.1
        imagePullPolicy: IfNotPresent
        name: csi-node-driver-registrar
        resources: {}
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            drop:
            - ALL
          privileged: true
          runAsGroup: 0
          runAsNonRoot: false
          runAsUser: 0
          seccompProfile:
            type: RuntimeDefault
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /csi
          name: socket-dir
        - mountPath: /registration
          name: registration-dir
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: csi-node-driver
      serviceAccountName: csi-node-driver
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - hostPath:
          path: /var/run
          type: ""
        name: varrun
      - hostPath:
          path: /var/lib/kubelet
          type: Directory
        name: kubelet-dir
      - hostPath:
          path: /var/lib/kubelet/plugins/csi.tigera.io
          type: DirectoryOrCreate
        name: socket-dir
      - hostPath:
          path: /var/lib/kubelet/plugins_registry
          type: Directory
        name: registration-dir


mazdakn commented Jan 14, 2025

@whereyourspace is your issue solved now?
