E2E framework should fail faster when terminal provisioning failures occur #6239
Comments
/milestone v1.2
This is somewhat related to the ongoing discussion about how to report terminal failures, e.g. #6218
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/triage accepted
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
I think this is still a nice-to-have if someone has time to take it on.
@killianmuldoon: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
(doing some cleanup on old issues without updates)
@fabriziopandini: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
There is currently no verification of progress when waiting for all control plane machines to become available, for all machine deployment machines to become available, for KCP/MD scaling operations, or for KCP/MD rollouts. As a result, if a terminal failure occurs (a failure message/reason set on an owned Machine), you have to wait for the wait-machine-upgrade, wait-control-plane, and/or wait-worker-nodes timeout to trigger. It would be nice if there were also a separate timeout on progress, so that these kinds of failures cause the test to fail more quickly and in a way that is easier to debug. A sketch of the fail-fast idea is shown below.
Originally posted by @detiber in #6143 (comment)