
Conversation

@mtnbikenc
Member

RHEL7 scaleup tests will always run but are not required

https://jira.coreos.com/browse/CORS-1059

Resubmit of #3748

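For context, a change like this typically amounts to flipping two switches on the e2e-aws-scaleup-rhel7 presubmit in the Prow job configuration so the job triggers on every PR but never gates the merge. The snippet below is a minimal sketch under that assumption, using the standard Prow presubmit fields `always_run` and `optional` and the job name visible in the CI links later in this thread; it is not the exact contents of the generated presubmits YAML, which also carries the full pod spec.

```yaml
# Illustrative sketch only (assumed shape, not the actual generated file).
presubmits:
  openshift/machine-config-operator:
  - name: pull-ci-openshift-machine-config-operator-master-e2e-aws-scaleup-rhel7
    context: ci/prow/e2e-aws-scaleup-rhel7
    branches:
    - master
    always_run: true    # run on every PR, no /test trigger required
    optional: true      # result is reported but does not block merging
```

With `always_run: true` the job no longer waits for a `/test` comment, and with `optional: true` a red run does not prevent the merge robot from landing the PR, which matches "will always run but are not required".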
@openshift-ci-robot openshift-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Aug 20, 2019
@kikisdeliveryservice
Contributor

Just wondering, is there a reason to always run this, or can we start by just running it on request?

@mtnbikenc
Member Author

Just wondering, is there a reason to always run this, or can we start by just running it on request?

The description of the linked Jira issue states:

The e2e-aws-scaleup-rhel7 job should always run on any repos which could directly impact RHEL7 scaleup. The test should be non-blocking.

We have had multiple instances where a change in MCO has broken scaleup and we didn't know about it until much later. If we can run this test on MCO changes, then we will have more information about when it started failing and which change caused it to break. Hopefully this means we can fix it sooner instead of digging around trying to figure out why it broke. That is at least the intent. If there is a better way to go about this, I'm happy to discuss. :)

@kikisdeliveryservice
Contributor

kikisdeliveryservice commented Aug 20, 2019

We have had multiple instances where a change in MCO has broken scaleup and we didn't know about it until much later. If we can run this test on MCO changes, then we will have more information about when it started failing and which change caused it to break. Hopefully this means we can fix it sooner instead of digging around trying to figure out why it broke. That is at least the intent. If there is a better way to go about this, I'm happy to discuss.

gotcha

cc: @runcom PTAL, as you had some comments/concerns the last time around. So I'll leave it to you to approve.

@runcom
Member

runcom commented Aug 21, 2019

I'm super happy to have this test in MCO to catch early regressions in scaleup - up until now the reason for not having it was that it wasn't reliable (meaning, it was flaking a lot). If the situation is better now, I'm +1 on adding it by default in MCO.

@mtnbikenc
Member Author

e2e test flakes are quite low now that only parallel tests are run, as with all the other jobs. There is one test that seems to fail more frequently on the RHEL scaleup job than on other jobs; it is being tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1726328.

Contributor

@kikisdeliveryservice left a comment


Let's do this. :)

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 21, 2019
@sdodson
Member

sdodson commented Aug 28, 2019

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Aug 28, 2019
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kikisdeliveryservice, mtnbikenc, sdodson

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit 2cb5618 into openshift:master Aug 28, 2019
@openshift-ci-robot
Contributor

@mtnbikenc: Updated the following 4 configmaps:

  • job-config-master-presubmits configmap in namespace ci using the following files:
    • key openshift-machine-config-operator-master-presubmits.yaml using file ci-operator/jobs/openshift/machine-config-operator/openshift-machine-config-operator-master-presubmits.yaml
  • job-config-4.1 configmap in namespace ci using the following files:
    • key openshift-machine-config-operator-release-4.1-presubmits.yaml using file ci-operator/jobs/openshift/machine-config-operator/openshift-machine-config-operator-release-4.1-presubmits.yaml
  • job-config-4.2 configmap in namespace ci using the following files:
    • key openshift-machine-config-operator-release-4.2-presubmits.yaml using file ci-operator/jobs/openshift/machine-config-operator/openshift-machine-config-operator-release-4.2-presubmits.yaml
  • job-config-4.3 configmap in namespace ci using the following files:
    • key openshift-machine-config-operator-release-4.3-presubmits.yaml using file ci-operator/jobs/openshift/machine-config-operator/openshift-machine-config-operator-release-4.3-presubmits.yaml

In response to this:

RHEL7 scaleup tests will always run but are not required

https://jira.coreos.com/browse/CORS-1059

Resubmit of #3748

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mtnbikenc mtnbikenc deleted the mco-run-scaleup branch August 28, 2019 14:04
@kikisdeliveryservice
Contributor

This test hasn't passed once since it was added to the repo:

https://prow.svc.ci.openshift.org/job-history/origin-ci-test/pr-logs/directory/pull-ci-openshift-machine-config-operator-master-e2e-aws-scaleup-rhel7

Is it a good use of resources to run a test that always fails? I see that the fix you mentioned above is slated for 4.3.

@runcom
Member

runcom commented Aug 29, 2019

There has been an issue with that test and some RPM today.
