Conversation

@Danil-Grigorev Danil-Grigorev commented Jul 23, 2020

Increase the timeout for machineSet readiness on Azure to 30 minutes.

Attempt to improve e2e test stability on Azure by increasing the machineSet timeouts even further and exposing more detailed information about the failure reason.
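
As an illustration only, not the actual cluster-api-actuator-pkg code, here is a minimal Go sketch of what such a change could look like: a 30 minute readiness timeout plus per-poll logging of which machineSets are still behind, so a timeout failure carries its reason. The helper name, the machinev1 import path, and the openshift-machine-api namespace are assumptions.

```go
package e2e

import (
	"context"
	"fmt"
	"time"

	// Assumed import path for the Machine API types at the time of this PR.
	machinev1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/klog"
	runtimeclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// machineSetReadyTimeout is the bumped budget for machineSets to report all
// replicas ready; the value mirrors the 30 minutes discussed in this PR.
const machineSetReadyTimeout = 30 * time.Minute

// waitForMachineSetsReady polls until every named machineSet reports
// ReadyReplicas equal to Spec.Replicas, or the timeout expires.
func waitForMachineSetsReady(c runtimeclient.Client, names []string) error {
	return wait.PollImmediate(15*time.Second, machineSetReadyTimeout, func() (bool, error) {
		var lagging []string
		for _, name := range names {
			ms := &machinev1.MachineSet{}
			key := runtimeclient.ObjectKey{Namespace: "openshift-machine-api", Name: name}
			if err := c.Get(context.TODO(), key, ms); err != nil {
				return false, err
			}
			want := int32(0)
			if ms.Spec.Replicas != nil {
				want = *ms.Spec.Replicas
			}
			if ms.Status.ReadyReplicas != want {
				lagging = append(lagging, fmt.Sprintf("%s: %d/%d ready", name, ms.Status.ReadyReplicas, want))
			}
		}
		if len(lagging) > 0 {
			// Surface the detailed state on every poll so an eventual timeout
			// explains which machineSets were still catching up.
			klog.Infof("machineSets not yet ready: %v", lagging)
			return false, nil
		}
		return true, nil
	})
}
```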

@Danil-Grigorev Danil-Grigorev changed the title Increase timeout for machineSet readiness (Azure) BUG 1856344: [sig-cluster-lifecycle][Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously Jul 23, 2020
@openshift-ci-robot openshift-ci-robot added the bugzilla/severity-medium Referenced Bugzilla bug's severity is medium for the branch this PR is targeting. label Jul 23, 2020
@openshift-ci-robot

@Danil-Grigorev: This pull request references Bugzilla bug 1856344, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.6.0) matches configured target release for branch (4.6.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

In response to this:

BUG 1856344: [sig-cluster-lifecycle][Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. label Jul 23, 2020
@openshift-ci-robot

@Danil-Grigorev: This pull request references Bugzilla bug 1856344, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.6.0) matches configured target release for branch (4.6.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

In response to this:

BUG 1856344: [sig-cluster-lifecycle][Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Danil-Grigorev
Author

/retest

4 similar comments

@Danil-Grigorev Danil-Grigorev force-pushed the bump-machine-set-wait-timeout branch from 6cb0b3e to a818055 on July 24, 2020 at 07:59
@Danil-Grigorev
Author

/retest

@Danil-Grigorev
Author

@enxebre What do you think? Merging this will allow multiple PRs in different repos to be merged over the weekend. I'd just like to hear your opinion.

@enxebre
Member

enxebre commented Jul 24, 2020

@enxebre What do you think? Merging this will allow multiple PRs in different repos to be merged over the weekend. I'd just like to hear your opinion.

How though? It seems this was retested up to 5 times before Azure passed. Do we know why those failures are happening? It seems increasing this timeout didn't help there.

@Danil-Grigorev
Author

The first time it failed on another Azure test with a 15 minute timeout. Once the cluster failed to launch, and twice there was a merge conflict.

The timeouts are happening because Azure is overloaded, and there are failure messages in the machine controller pods complaining about exceeded resource quotas for VM creation.

But eventually the VMs are created and all is fine. Looking into a must-gather, all machines are up and running for all machineSets, yet the tests failed claiming they were not. The difference is roughly a 5 minute window between the test failure and the snapshot.

To mitigate possible failures due to constantly growing demand on a single environment running multiple e2e tests, I settled on a timeout of 30 minutes.
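
For context, a hedged sketch of the test-side flow this supports: scaling several machineSets at once and giving the whole batch the 30 minute budget. It builds on the hypothetical waitForMachineSetsReady helper from the sketch in the PR description (same package and imports); scaleAndWait and the update-then-wait flow are illustrative, not the suite's actual API.

```go
// scaleAndWait bumps the replica count on each named machineSet and then
// waits up to the full 30 minutes for all of them to report ready, since
// Azure may sit on quota errors for a while before the VMs finally come up.
func scaleAndWait(c runtimeclient.Client, replicas int32, names ...string) error {
	for _, name := range names {
		ms := &machinev1.MachineSet{}
		key := runtimeclient.ObjectKey{Namespace: "openshift-machine-api", Name: name}
		if err := c.Get(context.TODO(), key, ms); err != nil {
			return err
		}
		r := replicas
		ms.Spec.Replicas = &r
		if err := c.Update(context.TODO(), ms); err != nil {
			return err
		}
	}
	// A single long wait across the whole batch, rather than a shorter
	// per-machineSet window, matches the 30 minute reasoning above.
	return waitForMachineSetsReady(c, names)
}
```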

@enxebre
Member

enxebre commented Jul 24, 2020

/approve
/lgtm
To help unblock PRs over the weekend

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 24, 2020
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: enxebre

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 24, 2020
@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.

6 similar comments

@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.

10 similar comments

@openshift-ci-robot

openshift-ci-robot commented Jul 25, 2020

@Danil-Grigorev: The following test failed, say /retest to rerun all failed tests:

Test name: ci/prow/e2e-gcp-operator
Commit: a818055
Rerun command: /test e2e-gcp-operator

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit a79c589 into openshift:master Jul 25, 2020
@openshift-ci-robot

@Danil-Grigorev: All pull requests linked via external trackers have merged: openshift/cluster-api-actuator-pkg#181. Bugzilla bug 1856344 has been moved to the MODIFIED state.


In response to this:

BUG 1856344: [sig-cluster-lifecycle][Feature:Machines][Serial] Managed cluster should grow and decrease when scaling different machineSets simultaneously

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
