OCPBUGS-2493: Fix TestUnmanagedDNSToManagedDNSInternal E2E test race conditions #845

Merged
openshift-merge-robot merged 1 commit into openshift:master from gcs278:dns-unmanaged-e2e on Oct 20, 2022

Conversation

@gcs278 (Contributor) commented Oct 18, 2022

OCPBUGS-2493: Fix TestUnmanagedDNSToManagedDNSInternalIngressController E2E flake

test/e2e/unmanaged_dns_test.go:

  • Fixed a service deletion race condition by ensuring the load balancer service actually changed
  • Fixed the IsServiceInternal check, which was checking for internal
    when it should have been checking for external
  • Fixed IsServiceInternal being passed an outdated service object

test/e2e/util_test.go:

  • Added an HTTP response body close in waitForHTTPClientCondition (the main issue; see the sketch after this list)
  • Increased the timeout to 10 minutes for verifyInternalIngressController and verifyExternalIngressController
  • Added a curl pod restart on failure in verifyInternalIngressController
  • Added a wait for DNS resolution in waitForHTTPClientCondition for debuggability
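
For illustration, here is a minimal sketch of the body-close fix. The helper name, signature, and use of wait.PollImmediate are assumptions for the sketch, not the exact code in util_test.go:

```go
package e2e

import (
	"net/http"
	"testing"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForHTTPOK is a hypothetical stand-in for waitForHTTPClientCondition.
// It polls reqURL until it returns 200 OK, closing the response body on
// every attempt so retries do not leak connections.
func waitForHTTPOK(t *testing.T, client *http.Client, reqURL string, interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		resp, err := client.Get(reqURL)
		if err != nil {
			t.Logf("retrying client call due to: %v", err)
			return false, nil // transient failure; keep polling
		}
		defer resp.Body.Close() // the main fix: always release the response body
		return resp.StatusCode == http.StatusOK, nil
	})
}
```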

@openshift-ci-robot openshift-ci-robot added the following labels on Oct 18, 2022: jira/severity-important (Referenced Jira bug's severity is important for the branch this PR is targeting.), jira/valid-bug (Indicates that a referenced Jira bug is valid for the branch this PR is targeting.), and bugzilla/valid-bug (Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting.)
@openshift-ci-robot (Contributor)

@gcs278: This pull request references Jira Issue OCPBUGS-2493, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.12.0) matches configured target version for branch (4.12.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @lihongan

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

OCPBUGS-2493: Fix TestUnmanagedDNSToManagedDNSInternalIngressController E2E flake

test/e2e/unmanaged_dns_test.go:

  • Fixed service deletion race condition by ensuring a UID change
  • Fixed the IsServiceInternal check, which was checking for internal when it should have been checking for external
  • Fixed IsServiceInternal being passed an outdated service object

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Miciah (Contributor) commented Oct 18, 2022

/test e2e-azure-operator

@Miciah (Contributor) commented Oct 18, 2022

e2e-aws-operator failed because TestUnmanagedDNSToManagedDNSInternalIngressController failed, but the failure is different now:

util_test.go:88: retrying client call due to: Get "http://a5d2a6a35aec849c99e6189f6a877f2e-492060313.us-west-2.elb.amazonaws.com/": EOF
    unmanaged_dns_test.go:294: failed to verify connectivity with workload with reqURL http://a5d2a6a35aec849c99e6189f6a877f2e-492060313.us-west-2.elb.amazonaws.com/ using external client: timed out waiting for the condition

We might just need a longer timeout on the polling loop; the scope change is known to take as long as ~6 minutes on AWS: https://bugzilla.redhat.com/show_bug.cgi?id=2034795

@Miciah (Contributor) commented Oct 18, 2022

e2e-azure-operator failed too, with a similar failure:

util_test.go:88: retrying client call due to: Get "http://52.143.243.13/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    unmanaged_dns_test.go:294: failed to verify connectivity with workload with reqURL http://52.143.243.13/ using external client: timed out waiting for the condition

Searching for *.unmanaged-migrated-internal in the ingress-operator logs, it looks like the dnsrecord is created unmanaged with an internal IP address, as expected:

2022-10-18T05:50:54.927Z	INFO	operator.ingress_controller	ingress/controller.go:1036	created dnsrecord	{"dnsrecord": {"metadata":{"name":"unmanaged-migrated-internal-wildcard","namespace":"openshift-ingress-operator","uid":"6cf23620-0cfb-450e-9a15-a72c0d2f8b7c","resourceVersion":"44312","generation":1,"creationTimestamp":"2022-10-18T05:50:54Z","labels":{"ingresscontroller.operator.openshift.io/owning-ingresscontroller":"unmanaged-migrated-internal"},"ownerReferences":[{"apiVersion":"operator.openshift.io/v1","kind":"IngressController","name":"unmanaged-migrated-internal","uid":"2a6d5ac0-5722-45a2-bf7d-3306fd4afd8a","controller":true,"blockOwnerDeletion":true}],"finalizers":["operator.openshift.io/ingress-dns"],"managedFields":[{"manager":"ingress-operator","operation":"Update","apiVersion":"ingress.operator.openshift.io/v1","time":"2022-10-18T05:50:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:finalizers":{".":{},"v:\"operator.openshift.io/ingress-dns\"":{}},"f:labels":{".":{},"f:ingresscontroller.operator.openshift.io/owning-ingresscontroller":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a6d5ac0-5722-45a2-bf7d-3306fd4afd8a\"}":{}}},"f:spec":{".":{},"f:dnsManagementPolicy":{},"f:dnsName":{},"f:recordTTL":{},"f:recordType":{},"f:targets":{}}}}]},"spec":{"dnsName":"*.unmanaged-migrated-internal.ci-op-ikmtjxjn-04a70.ci.azure.devcluster.openshift.com.","targets":["10.0.128.7"],"recordType":"A","recordTTL":30,"dnsManagementPolicy":"Unmanaged"},"status":{}}}
2022-10-18T05:50:54.928Z	INFO	operator.dns_controller	controller/controller.go:121	reconciling	{"request": "openshift-ingress-operator/unmanaged-migrated-internal-wildcard"}
2022-10-18T05:50:54.945Z	INFO	operator.dns_controller	dns/controller.go:198	DNS record not published	{"record": {"dnsName":"*.unmanaged-migrated-internal.ci-op-ikmtjxjn-04a70.ci.azure.devcluster.openshift.com.","targets":["10.0.128.7"],"recordType":"A","recordTTL":30,"dnsManagementPolicy":"Unmanaged"}}

Then it is updated to managed with a public IP address, again as expected:

2022-10-18T05:51:55.280Z	INFO	operator.dns	dns/controller.go:312	upserted DNS record	{"record": {"dnsName":"*.unmanaged-migrated-internal.ci-op-ikmtjxjn-04a70.ci.azure.devcluster.openshift.com.","targets":["10.0.128.7"],"recordType":"A","recordTTL":30,"dnsManagementPolicy":"Managed"}, "zone": {"id":"/subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-ikmtjxjn-04a70-8svq6-rg/providers/Microsoft.Network/privateDnsZones/ci-op-ikmtjxjn-04a70.ci.azure.devcluster.openshift.com"}}
2022-10-18T05:51:55.280Z	INFO	operator.dns_controller	dns/controller.go:359	published DNS record to zone	{"record": {"dnsName":"*.unmanaged-migrated-internal.ci-op-ikmtjxjn-04a70.ci.azure.devcluster.openshift.com.","targets":["10.0.128.7"],"recordType":"A","recordTTL":30,"dnsManagementPolicy":"Managed"}, "dnszone": {"id":"/subscriptions/d38f1e38-4bed-438e-b227-833f997adf6a/resourceGroups/ci-op-ikmtjxjn-04a70-8svq6-rg/providers/Microsoft.Network/privateDnsZones/ci-op-ikmtjxjn-04a70.ci.azure.devcluster.openshift.com"}}

So it isn't obvious to me what is going wrong.

	return false, nil
} else if ingresscontroller.IsServiceInternal(lbService) {
	// The service got recreated, but is not external
	t.Fatalf("load balancer %s is internal but should be external", lbService.Name)
Contributor

Shouldn't this just return an error? We catch and report any error after the call to PollImmediate.

Contributor Author

This means the service was successfully deleted and recreated, but still is not what we expect. There isn't another process that would recreate it again as far as I know, so we are dead in the water; we might as well stop the test as opposed to letting it keep going.

Contributor

Andy's point is that if you returned an error here from the polling loop, then the if err != nil { t.Fatalf(...) } immediately after the loop would still suffice to terminate the test.

Contributor Author

Ah sorry I thought you meant t.Errorf vs. t.Fatalf.
Will fix in next push.
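
For illustration, a rough sketch of the suggested pattern, in the spirit of the hunk above (kclient, serviceName, and the timeout values are assumptions): the polling callback returns an error instead of calling t.Fatalf, and the single error check after wait.PollImmediate terminates the test:

```go
err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
	lbService := &corev1.Service{}
	if err := kclient.Get(context.TODO(), serviceName, lbService); err != nil {
		return false, nil // service not recreated yet; keep polling
	}
	if ingresscontroller.IsServiceInternal(lbService) {
		// Recreated but still internal; nothing else will change it, so stop polling with an error.
		return false, fmt.Errorf("load balancer %s is internal but should be external", lbService.Name)
	}
	return true, nil
})
if err != nil {
	t.Fatalf("failed to observe external load balancer service: %v", err)
}
```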

@gcs278 gcs278 force-pushed the dns-unmanaged-e2e branch from 78e844f to e45dae7 on October 18, 2022 14:17
@gcs278 (Contributor Author) commented Oct 18, 2022

@Miciah it must be another problem. I increased verifyInternalIngressController to 8 minutes to see if that helps.

I've never seen EOF or Client.Timeout exceeded while awaiting headers before, but they seem related. Do you believe this is related to a load balancer still being initialized? Or maybe DNS for the external load balancer domain hasn't been initialized yet (AWS)?

@gcs278 (Contributor Author) commented Oct 18, 2022

/test e2e-azure-operator

@gcs278 gcs278 force-pushed the dns-unmanaged-e2e branch 2 times, most recently from cec3bfb to a237518 on October 18, 2022 14:55
@gcs278 (Contributor Author) commented Oct 18, 2022

/test e2e-azure-operator

@gcs278 (Contributor Author) commented Oct 18, 2022

I added resp.Body.Close() and req.Close = true to util_test.go, since I found https://bugzilla.redhat.com/show_bug.cgi?id=2037447 and some old threads about closing the HTTP connection before retrying.
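
As a rough sketch of the req.Close part (request construction only; reqURL and the surrounding error handling are illustrative), setting Close on the request tells the transport not to reuse the connection for subsequent attempts:

```go
req, err := http.NewRequest(http.MethodGet, reqURL, nil)
if err != nil {
	return err
}
// Send "Connection: close" and do not reuse this TCP connection for later
// retries, so each attempt starts with a fresh connection.
req.Close = true
```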

@gcs278 (Contributor Author) commented Oct 18, 2022

Whoops, I broke something... hang on.

@gcs278 gcs278 force-pushed the dns-unmanaged-e2e branch from a237518 to 8060eda on October 18, 2022 15:52
@Miciah (Contributor) commented Oct 18, 2022

@Miciah it must be another problem. I increased verifyInternalIngressController to 8 minutes to see if that helps.

I've never seen EOF or Client.Timeout exceeded while awaiting headers before, but they seem related. Do you believe this is related to a load balancer still being initialized? Or maybe DNS for the external load balancer domain hasn't been initialized yet (AWS)?

The ingress-operator logs in the e2e-azure-operator job indicated that DNS had been updated. I didn't check the ingress-operator logs in the e2e-aws-operator job. However, my guess would be that it's a delay in the LB initializing.

@Miciah (Contributor) commented Oct 18, 2022

/test e2e-azure-operator

@Miciah (Contributor) commented Oct 18, 2022

e2e-aws-operator failed, and the ingress clusteroperator is reporting the following:

{Operator progressing (Reconciling): ingresscontroller "sourcerangesstatus" is progressing: IngressControllerProgressing: One or more status conditions indicate progressing: LoadBalancerProgressing=True (OperandsProgressing: One or more managed resources are progressing: You have manually edited an operator-managed object. You must revert your modifications by removing the service.beta.kubernetes.io/load-balancer-source-ranges annotation on service "router-sourcerangesstatus". You can use the new AllowedSourceRanges API field on the ingresscontroller object to configure this setting instead.).  Operator progressing (Reconciling): ingresscontroller "sourcerangesstatus" is progressing: IngressControllerProgressing: One or more status conditions indicate progressing: LoadBalancerProgressing=True (OperandsProgressing: One or more managed resources are progressing: You have manually edited an operator-managed object. You must revert your modifications by removing the service.beta.kubernetes.io/load-balancer-source-ranges annotation on service "router-sourcerangesstatus". You can use the new AllowedSourceRanges API field on the ingresscontroller object to configure this setting instead.).}

@Miciah (Contributor) commented Oct 18, 2022

Also, TestUnmanagedDNSToManagedDNSIngressController had a nil-pointer dereference in waitForHTTPClientCondition. Probably resp was nil in defer resp.Body.Close().

@gcs278 gcs278 force-pushed the dns-unmanaged-e2e branch from 8060eda to ec4a671 on October 18, 2022 18:29
@gcs278 (Contributor Author) commented Oct 18, 2022

Also, TestUnmanagedDNSToManagedDNSIngressController had a nil-pointer dereference in waitForHTTPClientCondition. Probably resp was nil in defer resp.Body.Close().

Yeah, sorry, I thought I fixed it in the last push but forgot to do a git add.
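
For reference, a minimal sketch of the nil-safe ordering (conditionFn stands in for whatever condition the helper checks; not the exact code in util_test.go): check the error from client.Do before touching resp, since resp is nil when the request fails at the transport level:

```go
resp, err := client.Do(req)
if err != nil {
	// resp is nil here, so a defer resp.Body.Close() placed before this
	// check would panic with a nil-pointer dereference.
	t.Logf("retrying client call due to: %v", err)
	return false, nil
}
defer resp.Body.Close() // safe: resp is non-nil when err == nil
return conditionFn(resp), nil
```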

@Miciah (Contributor) commented Oct 18, 2022

/test e2e-azure-operator

@gcs278 gcs278 force-pushed the dns-unmanaged-e2e branch from ec4a671 to 2e51367 on October 18, 2022 18:59
@gcs278 (Contributor Author) commented Oct 18, 2022

/test e2e-azure-operator

@gcs278 (Contributor Author) commented Oct 18, 2022

Successful e2e-aws-operator run.
Edit: well, at least the cluster-ingress-operator E2E tests passed; the job failed on must-gather...

@openshift-ci openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Oct 19, 2022
@Miciah (Contributor) commented Oct 19, 2022

/approve

openshift-ci bot commented Oct 19, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Miciah

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Oct 19, 2022
…er E2E flake

`test/e2e/unmanaged_dns_test.go`:
  - Fixed a service deletion race condition by ensuring the load balancer service actually changed
  - Fixed the IsServiceInternal check, which was checking for internal
    when it should have been checking for external
  - Fixed IsServiceInternal being passed an outdated service object
`test/e2e/util_test.go`:
  - Added an HTTP response body close in waitForHTTPClientCondition (the main issue)
  - Increased the timeout to 10 minutes for verifyInternalIngressController and verifyExternalIngressController
  - Added a curl pod restart on failure in verifyInternalIngressController
  - Added a wait for DNS resolution in waitForHTTPClientCondition for debuggability
@gcs278 gcs278 force-pushed the dns-unmanaged-e2e branch from 49c5983 to fd2e991 on October 19, 2022 23:35
@openshift-ci openshift-ci bot removed the lgtm label (Indicates that a PR is ready to be merged.) on Oct 19, 2022
@Miciah (Contributor) commented Oct 19, 2022

/test e2e-azure-operator
/test e2e-gcp-operator
/lgtm

@openshift-ci openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Oct 19, 2022
@Miciah (Contributor) commented Oct 20, 2022

e2e-gcp-operator failed because TestRouterCompressionOperation failed (which #843 should fix); TestUnmanagedDNSToManagedDNSInternal passed. 🎉!

@Miciah (Contributor) commented Oct 20, 2022

e2e-azure-operator passed. 🎉 !

@gcs278 (Contributor Author) commented Oct 20, 2022

e2e-aws-operator had a successful E2E run but failed on e2e-aws-operator-ipi-deprovision-deprovision. The other issues look like install issues.
/retest
/test e2e-azure-operator
/test e2e-gcp-operator

@Miciah (Contributor) commented Oct 20, 2022

e2e-aws-operator failed because etcd failed to come up.
/test e2e-aws-operator

e2e-gcp-ovn-serial failed because the disruption/ingress-to-oauth-server connection/new and ingress-to-console connection/new disruption tests failed.
/test e2e-gcp-ovn-serial

@Miciah (Contributor) commented Oct 20, 2022

e2e-gcp-operator failed on both TestRouterCompressionOperation and TestInternalLoadBalancer. The latter is interesting.

The test output shows that TestInternalLoadBalancer timed out waiting for the ingresscontroller to become available before the load balancer became ready. From the ingress-operator logs, I see the following timeline:

  • At 2022-10-20T03:10:19.895Z, the operator created the service.
  • At 2022-10-20T03:14:34.921Z, the service was still pending.
  • At 2022-10-20T03:15:19.125Z, the ingresscontroller was deleted, presumably because the test timed out.

The kube-controller-manager logs show that k-c-m ensured the LB at 03:14:34.646931.

Strangely, I cannot find the create for that particular service in the kube-apiserver access logs, even though I can see various get and patch requests for that service and creates for other services that the operator created.

My guess is that kube-apiserver or k-c-m was overloaded, and the create in kube-apiserver, the watch in k-c-m, or the provisioning of the LB in GCP was delayed, causing TestInternalLoadBalancer to hit its 5-minute timeout. If this isn't a one-off flake, we might need to reduce the parallelism of the tests or increase timeouts where provisioning load balancers is involved.

@gcs278 (Contributor Author) commented Oct 20, 2022

Ack, this seems like the first time I've ever seen TestInternalLoadBalancer fail. Search.CI turns up results, but the oldest one was in a run with a ton of load-balancer-related failures, so I discount that one. Something to keep an eye on.

e2e-azure-operator failed on TestRouterCompressionOperation
/test e2e-azure-operator
/test e2e-gcp-operator

e2e-aws-operator appears to have passed E2E tests, but looks like it's going to fail on other stuff.

@Miciah (Contributor) commented Oct 20, 2022

/test e2e-aws-operator

@Miciah (Contributor) commented Oct 20, 2022

/retest-required

@gcs278 (Contributor Author) commented Oct 20, 2022

e2e-aws-operator: the e2e-aws-operator-ipi-deprovision-deprovision container failed again, along with other Operator progressing (NodeInstaller): NodeInstallerProgressing issues.

/retest-required

@gcs278 (Contributor Author) commented Oct 20, 2022

e2e-gcp-ovn-serial failed on known disruption issues.
/retest-required

@Miciah (Contributor) commented Oct 20, 2022

/override e2e-gcp-ovn-serial

openshift-ci bot commented Oct 20, 2022

@Miciah: /override requires failed status contexts, check run or a prowjob name to operate on.
The following unknown contexts/checkruns were given:

  • e2e-gcp-ovn-serial

Only the following failed contexts/checkruns were expected:

  • ci/prow/e2e-aws-operator
  • ci/prow/e2e-aws-ovn
  • ci/prow/e2e-aws-ovn-single-node
  • ci/prow/e2e-aws-ovn-upgrade
  • ci/prow/e2e-azure-operator
  • ci/prow/e2e-azure-ovn
  • ci/prow/e2e-gcp-operator
  • ci/prow/e2e-gcp-ovn-serial
  • ci/prow/images
  • ci/prow/unit
  • ci/prow/verify
  • pull-ci-openshift-cluster-ingress-operator-master-e2e-aws-operator
  • pull-ci-openshift-cluster-ingress-operator-master-e2e-aws-ovn
  • pull-ci-openshift-cluster-ingress-operator-master-e2e-aws-ovn-single-node
  • pull-ci-openshift-cluster-ingress-operator-master-e2e-aws-ovn-upgrade
  • pull-ci-openshift-cluster-ingress-operator-master-e2e-azure-operator
  • pull-ci-openshift-cluster-ingress-operator-master-e2e-azure-ovn
  • pull-ci-openshift-cluster-ingress-operator-master-e2e-gcp-operator
  • pull-ci-openshift-cluster-ingress-operator-master-e2e-gcp-ovn-serial
  • pull-ci-openshift-cluster-ingress-operator-master-images
  • pull-ci-openshift-cluster-ingress-operator-master-unit
  • pull-ci-openshift-cluster-ingress-operator-master-verify
  • tide

If you are trying to override a checkrun that has a space in it, you must put a double quote on the context.

In response to this:

/override e2e-gcp-ovn-serial

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Miciah (Contributor) commented Oct 20, 2022

/override ci/prow/e2e-gcp-ovn-serial

openshift-ci bot commented Oct 20, 2022

@Miciah: Overrode contexts on behalf of Miciah: ci/prow/e2e-gcp-ovn-serial

In response to this:

/override ci/prow/e2e-gcp-ovn-serial

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Miciah (Contributor) commented Oct 20, 2022

Looks like e2e-aws-operator is going to fail because the e2e-aws-operator-gather-must-gather step failed and because kube-apiserver, kube-controller-manager, and kube-scheduler reported NodeInstallerProgressing. The E2E tests all passed.
/override ci/prow/e2e-aws-operator

openshift-ci bot commented Oct 20, 2022

@Miciah: Overrode contexts on behalf of Miciah: ci/prow/e2e-aws-operator

In response to this:

Looks like e2e-aws-operator is going to fail because the e2e-aws-operator-gather-must-gather step failed and because kube-apiserver, kube-controller-manager, and kube-scheduler reported NodeInstallerProgressing. The E2E tests all passed.
/override ci/prow/e2e-aws-operator

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-merge-robot openshift-merge-robot merged commit eddd91e into openshift:master Oct 20, 2022
@openshift-ci-robot (Contributor)

@gcs278: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-2493 has been moved to the MODIFIED state.

In response to this:

OCPBUGS-2493: Fix TestUnmanagedDNSToManagedDNSInternalIngressController E2E flake

test/e2e/unmanaged_dns_test.go:

  • Fixed a service deletion race condition by ensuring the load balancer service actually changed
  • Fixed the IsServiceInternal check, which was checking for internal
    when it should have been checking for external
  • Fixed IsServiceInternal being passed an outdated service object

test/e2e/util_test.go:

  • Added an HTTP response body close in waitForHTTPClientCondition (the main issue)
  • Increased the timeout to 10 minutes for verifyInternalIngressController and verifyExternalIngressController
  • Added a curl pod restart on failure in verifyInternalIngressController
  • Added a wait for DNS resolution in waitForHTTPClientCondition for debuggability

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

openshift-ci bot commented Oct 20, 2022

@gcs278: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/e2e-gcp-operator; Commit: fd2e991; Required: false; Rerun command: /test e2e-gcp-operator

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@ShudiLi (Member) commented Oct 26, 2022

/label qe-approved

@openshift-ci openshift-ci bot added the qe-approved Signifies that QE has signed off on this PR label Oct 26, 2022