Problem:
There are a couple of pod statuses that, I think, haven't been documented. See the example below of an nginx pod which runs for 10 seconds and then terminates.
$ kubectl get pods --watch -o wide
NAME    READY   STATUS             RESTARTS   AGE   IP            NODE     NOMINATED NODE
nginx   0/1     CrashLoopBackOff   12         43m   192.168.2.5   node03   <none>
nginx   1/1     Running            13         44m   192.168.2.5   node03   <none>
nginx   0/1     Completed          13         44m   192.168.2.5   node03   <none>
nginx   0/1     CrashLoopBackOff   13         45m   192.168.2.5   node03   <none>
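For context, a minimal manifest that reproduces this behaviour could look like the sketch below (an assumption on my part; the original report does not include the manifest). The container exits cleanly after about 10 seconds, and the default restartPolicy: Always makes the kubelet restart it each time, which eventually produces the CrashLoopBackOff entries shown above.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  # Defaults to Always; the kubelet restarts the container even after exit code 0,
  # and repeated restarts trigger the exponential back-off (CrashLoopBackOff).
  restartPolicy: Always
  containers:
  - name: nginx
    image: nginx
    # Override the image entrypoint so the container exits after ~10 seconds.
    command: ["sh", "-c", "sleep 10"]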
Proposed Solution:
Please document the CrashLoopBackOff and Completed pod statuses.
Page to Update:
It should be somewhere here:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase
@makoscafee thank you for the fix. I have one query regarding:
CrashLoopBackOff | This means that one of the containers in the pod has exited unexpectedly, and perhaps with a non-zero error code even after restarting due to restart policy.
Even though the pod completed without any error, it is going into the CrashLoopBackOff state.
This is a...
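As a side note (my own addition, not part of the original comment): this behaviour follows from the pod's restartPolicy. With the default Always, the kubelet restarts the container even after a clean exit, and the repeated restarts trigger the back-off, so the pod alternates between Completed and CrashLoopBackOff. One way to confirm that the last exit was actually clean is to inspect the container's last termination state:

# Print the exit code of the container's most recent termination
kubectl get pod nginx -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}{"\n"}'

# Or inspect the full status, including restart count, reason, and events
kubectl describe pod nginx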