➜ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-192-168-26-140.ec2.internal Ready <none> 6m26s v1.15.10 192.168.26.140 100.24.122.15 Bottlerocket OS 0.3.1 5.4.16 containerd://1.3.3+unknown
ip-192-168-62-190.ec2.internal Ready <none> 6m27s v1.15.10 192.168.62.190 3.88.114.1 Bottlerocket OS 0.3.1 5.4.16 containerd://1.3.3+unknown
Default Storage Class:
➜ kubectl get sc
NAME PROVISIONER AGE
gp2 (default) kubernetes.io/aws-ebs 14m
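For context, the default gp2 class on EKS is created automatically and typically looks roughly like the following (an assumed reconstruction, not taken from this cluster); note the in-tree kubernetes.io/aws-ebs provisioner, which is relevant to the failure below:

```yaml
# Typical EKS default gp2 StorageClass (assumed; EKS creates this automatically).
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs   # in-tree plugin; formats volumes on the host
parameters:
  type: gp2
  fsType: ext4
```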
What I expected to happen:
I was able to run the same Kubernetes job on a Docker + non-Bottlerocket-OS-based EKS cluster. I was expecting the same to work on this containerd + Bottlerocket-OS-based EKS cluster.
What actually happened:
Upon applying the above-mentioned manifest file, the following is the status of my cluster:
➜ ~ kubectl get sc,pv,pvc,jobs,pods
NAME PROVISIONER AGE
storageclass.storage.k8s.io/gp2 (default) kubernetes.io/aws-ebs 15m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-89ba1eab-482c-4c7d-aba7-daa4dc616a34 1Gi RWO Delete Bound default/job-pv-claim gp2 47s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/job-pv-claim Bound pvc-89ba1eab-482c-4c7d-aba7-daa4dc616a34 1Gi RWO gp2 53s
NAME COMPLETIONS DURATION AGE
job.batch/pi 0/1 52s 52s
NAME READY STATUS RESTARTS AGE
pod/pi-bqcnf 0/1 ContainerCreating 0 53s
I received the following error upon describing the pod:
➜ kubectl describe pod/pi-bqcnf
Name: pi-bqcnf
Namespace: default
Priority: 0
Node: ip-192-168-26-140.ec2.internal/192.168.26.140
Start Time: Fri, 27 Mar 2020 00:01:52 +0530
Labels: controller-uid=0817c902-a99e-424b-b194-0518168f0e4c
job-name=pi
Annotations: kubernetes.io/psp: eks.privileged
Status: Pending
IP:
IPs: <none>
Controlled By: Job/pi
Containers:
pi:
Container ID:
Image: perl
Image ID:
Port: <none>
Host Port: <none>
Command:
perl
-Mbignum=bpi
-wle
print bpi(2000)
&&
df
-Th
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/mnt from job-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hblvr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
job-pv-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: job-pv-claim
ReadOnly: false
default-token-hblvr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hblvr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 71s default-scheduler Successfully assigned default/pi-bqcnf to ip-192-168-26-140.ec2.internal
Normal SuccessfulAttachVolume 67s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-89ba1eab-482c-4c7d-aba7-daa4dc616a34"
Warning FailedMount 24s (x7 over 62s) kubelet, ip-192-168-26-140.ec2.internal MountVolume.MountDevice failed for volume "pvc-89ba1eab-482c-4c7d-aba7-daa4dc616a34" : executable file not found in $PATH
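The original manifest file is not attached here, but a rough reconstruction can be inferred from the pod description above. All field values below are assumptions based only on that output (names like `job-pv-claim`, `pi`, and the mount path come from the describe output; everything else is guessed):

```yaml
# Assumed reconstruction of the Job + PVC manifest, inferred from
# the `kubectl describe pod` output in this issue.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: job-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 1Gi
---
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl
          # Command exactly as shown by `kubectl describe pod` above.
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)", "&&", "df", "-Th"]
          volumeMounts:
            - name: job-pv-storage
              mountPath: /mnt
      volumes:
        - name: job-pv-storage
          persistentVolumeClaim:
            claimName: job-pv-claim
      restartPolicy: Never   # assumed; Jobs require Never or OnFailure
```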
How to reproduce the problem:
The error can be easily reproduced by using the above-mentioned manifest files. I am currently unsure whether the issue is with containerd or with Bottlerocket OS.
What's most likely happening is that it's failing to find the filesystem formatting tools needed to format the filesystem initially. Bottlerocket does not include e2fsprogs or xfsprogs in the OS.
Our recommendation is to use CSI drivers. Can you try setting up the EBS CSI driver for your cluster and see if that helps?
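A minimal sketch of that approach, for anyone following along: install the EBS CSI driver, then use a StorageClass backed by the `ebs.csi.aws.com` provisioner instead of the in-tree `kubernetes.io/aws-ebs` plugin, so formatting happens inside the driver's node pod rather than on the host OS. The class name and driver version ref below are illustrative assumptions:

```yaml
# Sketch of a StorageClass using the EBS CSI driver (names are illustrative).
# Install the driver first per the aws-ebs-csi-driver docs, e.g.:
#   kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc               # illustrative name, not from this issue
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp2
  csi.storage.k8s.io/fstype: ext4   # formatted by the CSI node plugin, not the host
```

Existing PVCs would then reference `storageClassName: ebs-sc` (or the CSI class could be made the default) so new volumes are provisioned and formatted by the driver.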
Platform I'm building on:
I installed the EKS cluster via eksctl. Please find below the cluster yml file.