Describe the bug
When installing Rancher, the UI comes up but shows several errors: etcd, the controller-manager, and the scheduler are red and report as unhealthy.
Checking the pods, you'll find everything running except the crashing cattle-node-agent:

vagrant@ubuntu-xenial:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                    READY   STATUS             RESTARTS   AGE
cattle-system   cattle-cluster-agent-5f59578445-szlpw   1/1     Running            0          12m
cattle-system   cattle-node-agent-w5rj4                 0/1     CrashLoopBackOff   7          12m
cattle-system   rancher-6756fb68d4-bq7q6                1/1     Running            1          14m
cattle-system   rancher-6756fb68d4-gxdpz                1/1     Running            1          14m
cattle-system   rancher-6756fb68d4-s8pj4                1/1     Running            0          14m
kube-system     cert-manager-6464494858-qffzh           1/1     Running            0          15m
kube-system     coredns-7748f7f6df-c595c                1/1     Running            0          15m
kube-system     helm-install-traefik-m7gmj              0/1     Completed          0          15m
kube-system     svclb-traefik-7fd99b58f5-w9r87          2/2     Running            0          15m
kube-system     tiller-deploy-6bbdcdc7ff-92pwv          1/1     Running            0          15m
kube-system     traefik-dcd66ffd7-p27lx                 1/1     Running            0          15m

Error log:
vagrant@ubuntu-xenial:~$ kubectl logs -n cattle-system cattle-node-agent-w5rj4
ERROR: Please bind mount in the docker socket to /var/run/docker.sock
ERROR: example: docker run -v /var/run/docker.sock:/var/run/docker.sock ...
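The agent's complaint can be reproduced by hand on the node. A minimal sketch (assuming the agent simply tests for a socket at the standard Docker path): on a k3s node this check would typically fail, since k3s ships containerd rather than Docker, so there is no docker.sock on the host to bind-mount in.

```shell
# Check whether the path the agent expects is actually a Unix socket.
# On a k3s host this is normally absent because k3s uses containerd,
# not the Docker daemon.
if [ -S /var/run/docker.sock ]; then
  sock_status="present"
else
  sock_status="missing"
fi
echo "docker socket: $sock_status"
```

This also explains why the mounts below look correct: `/var/run` is mounted from the host, but the host never had a `docker.sock` inside it.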
If we look into kubectl describe, it seems that volumes are mounted correctly:
Mounts:
/cattle-credentials from cattle-credentials (ro)
/etc/kubernetes from k8s-ssl (rw)
/run from run (rw)
/var/run from var-run (rw)
/var/run/secrets/kubernetes.io/serviceaccount from cattle-token-cpxw5 (ro)
To Reproduce
Install k3s using the official installer script, then install Helm and Rancher according to their docs. Wait a few minutes and check the pods as described above.
Expected behavior
Cattle runs like it does on a default Kubernetes-based Rancher cluster.
Additional context
The vagrant box image ubuntu/xenial64 (16.04.5 LTS) was used. I think this issue is related to containerd, which k3s uses instead of Docker: cattle expects the docker socket, which isn't there. Additionally, I applied this DNS workaround since I'm using an internal domain that couldn't be resolved from the pods otherwise.