BUG 1856344: [sig-cluster-lifecycle][Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously #181
Conversation
@Danil-Grigorev: This pull request references Bugzilla bug 1856344, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validations were run on this bug.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@Danil-Grigorev: This pull request references Bugzilla bug 1856344, which is valid. 3 validations were run on this bug.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest

4 similar comments
Force-pushed from 6cb0b3e to a818055.
/retest
@enxebre What do you think? Merging this will allow multiple PRs in different repos to be merged over the weekend. I'd just like to hear your opinion.

How though? It seems this was retested up to 5 times before Azure passed. Do we know why those failures are happening? It seems this timeout increase didn't help there.
The first time it failed on another Azure test with a 15 minute timeout. Once the cluster failed to launch, and twice there was a merge conflict. The timeouts are happening because Azure is overloaded, and there are failure messages in the machine controller pods complaining about exceeding resource quotas for VM creation. But eventually the VMs are created and all is fine. Looking into some must-gather data, all machines are up and running for all machineSets, but the tests failed saying they are not. The difference there is a roughly 5 minute window between the test failure and the snapshot. To mitigate possible failures due to constantly growing demand on a single environment running multiple e2e tests, I settled on a timeout of 30 minutes.
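For illustration only, here is a minimal sketch in Go (not the PR's actual test code; `waitForMachineSetReady` and `getReadyReplicas` are hypothetical names) of what a readiness wait with the longer 30-minute timeout might look like:

```go
// A minimal sketch, assuming a hypothetical helper that reads MachineSet
// status; it is not the actual cluster-api-actuator-pkg implementation.
package main

import (
	"fmt"
	"time"
)

const (
	// Raised from 15 to 30 minutes because Azure can throttle VM creation on
	// quota errors before the machines eventually come up.
	machineSetReadyTimeout = 30 * time.Minute
	pollInterval           = 15 * time.Second
)

// waitForMachineSetReady polls until the machineSet reports the desired number
// of ready replicas, or the timeout expires.
func waitForMachineSetReady(name string, desired int, getReadyReplicas func(string) (int, error)) error {
	deadline := time.Now().Add(machineSetReadyTimeout)
	for {
		ready, err := getReadyReplicas(name)
		if err == nil && ready == desired {
			return nil
		}
		if time.Now().After(deadline) {
			// Include the last observed state so a timeout is easier to
			// diagnose than a bare "timed out" error.
			return fmt.Errorf("machineSet %s not ready after %s: ready=%d, desired=%d, lastErr=%v",
				name, machineSetReadyTimeout, ready, desired, err)
		}
		time.Sleep(pollInterval)
	}
}

func main() {
	// Toy usage with a fake status reader that is immediately ready.
	err := waitForMachineSetReady("machineset-a", 3, func(string) (int, error) { return 3, nil })
	fmt.Println("wait result:", err)
}
```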
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: enxebre

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/retest Please review the full test history for this PR and help us cut down flakes.

6 similar comments

/retest Please review the full test history for this PR and help us cut down flakes.

10 similar comments
@Danil-Grigorev: The following test failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest Please review the full test history for this PR and help us cut down flakes.
@Danil-Grigorev: All pull requests linked via external trackers have merged: openshift/cluster-api-actuator-pkg#181. Bugzilla bug 1856344 has been moved to the MODIFIED state.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Increase timeout for machineSet readiness (Azure) to 30 min now.

Attempt to increase e2e test stability on Azure by increasing machineSet timeouts even further, and by exposing more detailed information about the failure reason.
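For illustration, a minimal sketch of the "more detailed information about the failure reason" idea described above: on timeout, report each machine's phase and last error instead of a bare failure. All names and fields here (`machineStatus`, `describeFailure`) are hypothetical, not the PR's code.

```go
// A hypothetical sketch of reporting per-machine state when a readiness wait
// fails; it is illustrative only and not the actual test framework code.
package main

import (
	"fmt"
	"strings"
)

// machineStatus is an illustrative summary of a Machine's observed state.
type machineStatus struct {
	Name  string
	Phase string // e.g. "Provisioning", "Running", "Failed"
	Error string // e.g. a provider quota error surfaced by the machine controller
}

// describeFailure builds a diagnostic message listing every machine in the
// machineSet along with its phase and last recorded error.
func describeFailure(machineSet string, machines []machineStatus) string {
	var b strings.Builder
	fmt.Fprintf(&b, "machineSet %s did not become ready; machines:\n", machineSet)
	for _, m := range machines {
		fmt.Fprintf(&b, "  %s phase=%s error=%q\n", m.Name, m.Phase, m.Error)
	}
	return b.String()
}

func main() {
	fmt.Print(describeFailure("machineset-a", []machineStatus{
		{Name: "machineset-a-worker-0", Phase: "Provisioning", Error: "quota exceeded (placeholder)"},
	}))
}
```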