Crier failing to report OOMed pods #210
I guess that you mean crier... 😊
😢
I don't think this is crier - that only comes later. I think this is podutils interacting with kubernetes/kubernetes#117793.
The above (from the linked Slack thread) indicates one of the two containers in the Pod is still alive. Not sure what the solution would be yet.
IMHO crier should still eventually report what happened here because it is observing the pod success / fail from the outside, but we see pods being permanently reported as running (and if they really are running we should be enforcing a timeout via plank?). We could either hand off more of this to crier / plank to observe the test container failure, or we could introduce additional heart-beating / signaling between the entrypoint and sidecar?
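For reference, a minimal sketch of what the heart-beat idea could look like, assuming the entrypoint and sidecar share an emptyDir volume. The marker path, interval, and staleness threshold below are made up for illustration and are not part of Prow's actual pod-utils:

```go
// Illustrative only: markerPath, interval, and staleAfter are hypothetical,
// not part of Prow's entrypoint/sidecar.
package main

import (
	"fmt"
	"os"
	"time"
)

const (
	markerPath = "/logs/heartbeat" // shared emptyDir mounted in both containers (assumption)
	interval   = 15 * time.Second
	staleAfter = 2 * time.Minute
)

// beat would run in the entrypoint container: touch the marker while the
// test process is alive.
func beat(stop <-chan struct{}) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-stop:
			return
		case now := <-t.C:
			_ = os.WriteFile(markerPath, []byte(now.Format(time.RFC3339)), 0o644)
		}
	}
}

// testContainerAlive would run in the sidecar: if the marker goes stale,
// assume the test container died (e.g. OOMKilled) and finalize the run
// instead of waiting forever.
func testContainerAlive() bool {
	info, err := os.Stat(markerPath)
	if err != nil {
		return false
	}
	return time.Since(info.ModTime()) < staleAfter
}

func main() {
	stop := make(chan struct{})
	go beat(stop)
	time.Sleep(interval + time.Second) // give the beater a chance to write once
	fmt.Println("test container alive:", testContainerAlive())
	close(stop)
}
```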
True; you are right that Plank should eventually reap a pod stuck like this - if that does not happen then it's perhaps a separate bug. Not sure if plank just uses too-soft signals (sidecar iirc ignores some signals in certain states when it thinks it still needs to finish uploading artifacts) or if it somehow fails to enforce a timeout entirely.
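For illustration, here is a rough sketch of what a reaper-style timeout could look like with client-go. This is not plank's actual implementation; the timeout, grace period, and escalation policy are assumptions:

```go
// Package reaper: illustrative sketch of reaping a pod stuck in Running,
// not plank's actual logic.
package reaper

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const (
	podTimeout  = 4 * time.Hour   // hypothetical per-job timeout
	gracePeriod = 2 * time.Minute // let the sidecar finish uploads first
)

// reapIfStuck deletes a pod that has been Running longer than podTimeout.
func reapIfStuck(ctx context.Context, c kubernetes.Interface, pod *corev1.Pod) error {
	if pod.Status.Phase != corev1.PodRunning {
		return nil
	}
	if time.Since(pod.CreationTimestamp.Time) < podTimeout {
		return nil
	}
	// Try a graceful delete first so the sidecar can flush artifacts;
	// SIGTERM alone may be ignored while it is still uploading.
	grace := int64(gracePeriod / time.Second)
	err := c.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	})
	if err != nil {
		return err
	}
	// A follow-up force delete (grace period 0) would be the escalation path
	// if the pod is still around after the grace period expires.
	return nil
}
```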
Left a breadcrumb on kubernetes/kubernetes#117793 because I think it's worth noting when the project breaks itself with breaking changes :) This problem is probably going to get worse as cgroup v2 rolls out to the rest of our clusters and more widely in the ecosystem.
We actually appear to have uploaded all the artifacts?? But we're just not recording a finished.json or the test log.
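As an aside, a minimal sketch of how one could check for builds in this state, assuming the standard Prow GCS layout where finished.json sits at the build root. The bucket and prefix below are placeholders:

```go
// Sketch: detect builds whose artifacts exist but whose finished.json was
// never written. Bucket and prefix are placeholders, not real configuration.
package main

import (
	"context"
	"errors"
	"fmt"
	"log"

	"cloud.google.com/go/storage"
)

// hasFinished reports whether <buildPrefix>/finished.json exists in the bucket.
func hasFinished(ctx context.Context, client *storage.Client, bucket, buildPrefix string) (bool, error) {
	obj := client.Bucket(bucket).Object(buildPrefix + "/finished.json")
	if _, err := obj.Attrs(ctx); err != nil {
		if errors.Is(err, storage.ErrObjectNotExist) {
			return false, nil
		}
		return false, err
	}
	return true, nil
}

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Placeholder build path for illustration.
	ok, err := hasFinished(ctx, client, "kubernetes-jenkins", "logs/some-job/12345")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("finished.json present:", ok)
}
```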
I'm seeing a lack of metadata (like the pod json) on pods that received SIGTERM and probably(?) weren't OOMed; at least there's no indication they were, and monitoring suggests they're actually not using much of the memory requested/limited.
https://prow.k8s.io/job-history/gs/kubernetes-jenkins/logs/ci-etcd-robustness-arm64
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-etcd-robustness-arm64/1829281539263827968
In that case we have some other weirdness: https://kubernetes.slack.com/archives/CCK68P2Q2/p1724976972581789?thread_ts=1724975407.113939&cid=CCK68P2Q2
Afaik when a container is OOMKilled it exits with code 137. However, no signal is sent to the other containers of the same pod, so the sidecar has no direct way to notice. Thus, we would need either a kind of heart-beat between the entrypoint and the sidecar, or to have crier / plank detect the failure from outside, as discussed above. I prefer the second option. WDYT?
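For what an external observer could key off, here is a small sketch of reading the terminated state from the pod's container statuses. This is generic client-go / API-types usage, not crier's actual reporting code:

```go
// Sketch: inspect a pod's container statuses for an OOMKilled termination.
// Generic API-types usage, not crier's actual logic.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// oomKilled reports whether any container in the pod terminated due to OOM.
// The kubelet sets Reason to "OOMKilled"; the exit code is typically 137
// (128 + SIGKILL).
func oomKilled(pod *corev1.Pod) (string, bool) {
	statuses := make([]corev1.ContainerStatus, 0,
		len(pod.Status.ContainerStatuses)+len(pod.Status.InitContainerStatuses))
	statuses = append(statuses, pod.Status.ContainerStatuses...)
	statuses = append(statuses, pod.Status.InitContainerStatuses...)

	for _, cs := range statuses {
		t := cs.State.Terminated
		if t == nil {
			t = cs.LastTerminationState.Terminated
		}
		if t != nil && (t.Reason == "OOMKilled" || t.ExitCode == 137) {
			return cs.Name, true
		}
	}
	return "", false
}

func main() {
	// In practice the pod would be fetched from the API server; this is a
	// placeholder to keep the sketch self-contained.
	var pod corev1.Pod
	if name, ok := oomKilled(&pod); ok {
		fmt.Printf("container %s was OOMKilled\n", name)
	}
}
```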
That sounds reasonable, but I haven't been in the prow internals for some time now. @petr-muller @krzyzacy might have thoughts 🙏
In 1.32, the OOM behavior is configurable: kubernetes/kubernetes#126096.
We use managed clusters for practical reasons, so we're generally getting the defaults (unless we convince EKS and GKE, and someday AKS, that they should change the defaults).
/cc
Will use this as a good ramp-up issue and circle back in the next dev cycle.
In prow.k8s.io we're seeing some pods fail to report status beyond "running" that have really been OOMed (if we manually inspect via the build cluster).
See: https://kubernetes.slack.com/archives/C7J9RP96G/p1721926824700569 and other #testing-ops threads.
/kind bug