Conversation


@mshitrit mshitrit commented Feb 7, 2021

Backporting the Machine Health Check external remediation template logic.
Feature Design: here


mshitrit commented Feb 7, 2021

/retest

1 similar comment

mshitrit commented Feb 8, 2021

/retest


beekhof commented Feb 9, 2021

/test e2e-azure-operator


@beekhof beekhof left a comment


Good progress, but let's not introduce any changes we don't need to.
My major concern is the changes to the existing API fields.

Comment on lines -82 to -97
// InvalidConfigurationClusterError indicates that the cluster
// configuration is invalid.
InvalidConfigurationClusterError ClusterStatusError = "InvalidConfiguration"

// UnsupportedChangeClusterError indicates that the cluster
// spec has been updated in an unsupported way. That cannot be
// reconciled.
UnsupportedChangeClusterError ClusterStatusError = "UnsupportedChange"

// CreateClusterError indicates that an error was encountered
// when trying to create the cluster.
CreateClusterError ClusterStatusError = "CreateError"

// UpdateClusterError indicates that an error was encountered
// when trying to update the cluster.
UpdateClusterError ClusterStatusError = "UpdateError"
Contributor


Did you mean to remove these?

Contributor Author


Yep.
They weren't being used in MAO, so I assumed it was a pretty safe bet to remove them.
Of course I could be wrong here - so I'm looking for cloud team input.


I think these changes should at least go in a separate commit or a different PR. Agreed - we don't use these values, and we neither have nor plan to have a Cluster resource in MAPI, but it still makes such a change hard to approve.


beekhof commented Feb 9, 2021

/test e2e-vsphere

Comment on lines 42 to 72
func NewExternalRemediationTemplate() *unstructured.Unstructured {
	// Create remediation template resource.
	infraRemediationResource := map[string]interface{}{
		"kind":       "InfrastructureRemediation",
		"apiVersion": "infrastructure.machine.openshift.io/v1alpha3",
		"metadata":   map[string]interface{}{},
		"spec": map[string]interface{}{
			"size": "3xlarge",
		},
	}
	infraRemediationTmpl := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"spec": map[string]interface{}{
				"template": infraRemediationResource,
			},
		},
	}
	infraRemediationTmpl.SetKind("InfrastructureRemediationTemplate")
	infraRemediationTmpl.SetAPIVersion("infrastructure.machine.openshift.io/v1alpha3")
	infraRemediationTmpl.SetGenerateName("remediation-template-name-")
	infraRemediationTmpl.SetNamespace(Namespace)

	return infraRemediationTmpl
}

func NewExternalRemediationMachine() *unstructured.Unstructured {
	return &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind":       "InfrastructureRemediation",
			"apiVersion": "infrastructure.machine.openshift.io/v1alpha3",
			"metadata": map[string]interface{}{
				"name":      "Machine",
				"namespace": Namespace,
			},
			"spec": map[string]interface{}{
				"size": "3xlarge",
			},
		},
	}
}


It is not used anywhere. Please add tests which will show the functionality.

Contributor Author

@mshitrit mshitrit Mar 3, 2021


I don't follow.
Those methods are used in pkg/controller/machinehealthcheck/machinehealthcheck_controller_test.go:461

//external remediation machine template crd
erTemplate := maotesting.NewExternalRemediationTemplate()
mhcWithRemediationTemplate := newMachineHealthCheckWithRemediationTemplate(erTemplate)
erm := maotesting.NewExternalRemediationMachine()
 

// +kubebuilder:validation:Type:=string
NodeStartupTimeout metav1.Duration `json:"nodeStartupTimeout,omitempty"`

// RemediationTemplate is a reference to a remediation template


Are we sure this does not require a whole new set of RBAC permissions on our side?

Contributor


How will the controller be able to create new objects from the template? Who is going to be adding the appropriate RBAC? Is that going to be part of the default set of RBAC or will another component grant permission to the MHC service account in their own way?

Contributor Author


There was no change in this logic from upstream.
In a nutshell, the existence of a template means that the proper CRD exists, so a CR (an External Machine Remediation, or EMR) can be created.
If you want to read more, here is the upstream md file
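To make the template-to-CR flow concrete, here is a rough, self-contained sketch. This is not the actual MAO or upstream code - the function name and field access are made up for illustration - but it shows the idea: the controller copies the template's inner spec into a new EMR object whose kind drops the Template suffix.

```go
package main

import (
	"fmt"
	"strings"
)

// cloneFromTemplate sketches how an External Machine Remediation (EMR) CR
// could be stamped out from a remediation template for one machine.
// Illustrative stand-in only, not the controller's real code.
func cloneFromTemplate(template map[string]interface{}, machineName, namespace string) map[string]interface{} {
	inner := template["spec"].(map[string]interface{})["template"].(map[string]interface{})

	return map[string]interface{}{
		// The CR kind is the template kind minus the "Template" suffix.
		"kind":       strings.TrimSuffix(template["kind"].(string), "Template"),
		"apiVersion": template["apiVersion"],
		"metadata": map[string]interface{}{
			"name":      machineName,
			"namespace": namespace,
		},
		"spec": inner["spec"],
	}
}

func main() {
	tmpl := map[string]interface{}{
		"kind":       "InfrastructureRemediationTemplate",
		"apiVersion": "infrastructure.machine.openshift.io/v1alpha3",
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"spec": map[string]interface{}{"size": "3xlarge"},
			},
		},
	}
	cr := cloneFromTemplate(tmpl, "worker-0", "openshift-machine-api")
	fmt.Println(cr["kind"], cr["metadata"].(map[string]interface{})["name"]) // InfrastructureRemediation worker-0
}
```

The point being debated below is orthogonal: even when the CRD and template exist, the controller's service account still needs RBAC to create the resulting CR.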

Contributor


Just because a CRD exists, doesn't mean a user has permission to create resources 😉 Something will need to give the Machine Health Check controller's service account permission to create these resources. It could be that we just add those to the defaults (which I think is perfectly acceptable), but to do this we need to know ahead of time all of the types that could possibly be referred to here.

Contributor Author


I understand.
I actually had to do this in my e2e test.
However, I'm not sure it's something we can know ahead of time - since I'm pretty sure the CRDs are defined by a third party (external).
It might be something that needs to be updated in coordination with the external user.
/cc @n1r1 what do you think?

Contributor


What would be the downside to implementing the suggestion I made?

IMO forcing all external remediations into a single API group will limit us severely. It means that only operators built and maintained by OpenShift could be used, i.e., no community remediation templates. If that's not an issue then sure, we can go down this route.

If it's more, perhaps the operators responsible for those external remediation strategies should be responsible for adding a binding for the service account?

I think my preference would be this kind of approach, it would be consistent across both openshift and community remediation methods at least

From my point of view, external remediation is a generic infrastructure. It is not tied to a specific platform.
I prefer to have a complete decoupling between MHC and the external remediation controller.
There's a known api, and whoever wants can just implement it.

Yes, I appreciate that and definitely want it to be generic, but that doesn't fit RBAC very well, so we need to come up with some solution for granting the MHC permission on the appropriate groups.

Contributor


What would be the downside to implementing the suggestion I made?

IMO forcing all external remediations into a single API group will limit us severely. It means that only operators built and maintained by OpenShift could be used, i.e., no community remediation templates. If that's not an issue then sure, we can go down this route.

Good point.
We could take a hybrid approach, where OpenShift remediators can be added seamlessly while community operators would need an extra step (either patching the SA or updating the RBAC in this repo).

If it's more, perhaps the operators responsible for those external remediation strategies should be responsible for adding a binding for the service account?

I think my preference would be this kind of approach, it would be consistent across both openshift and community remediation methods at least

The service account could differ between distributions, couldn't it? For example, the MHC SA could be ocp-mhc-sa in OCP and capi-mhc-sa in CAPI-based clusters, so the remediator would need to patch all "known" SAs.

Also, I'm not sure anyone will remember that changing the MHC service account breaks other external components.

Maybe remediators should expose some config value - either the apiGroup to be used or the ServiceAccount to be patched.

Contributor


A new idea came to mind - what about having a non-OpenShift API group specifically for external remediation?
For example, external-remediation.io. We could propose that upstream as well, so that community remediators could work the same way on OCP and upstream by using that group.

WDYT?

Contributor


just wanted to add another idea here. we could create the default build with a tightly focused api group for the external remediation template, but give the user clear instructions on how to override/expand that initial group. basically as a day 2 operation they could create their own special EMRs and load those in, but by default it would work "out of the box" with a reasonable setting.

We could take an hybrid approach, where openshift remediators can be seamlessly added while community operators would need to do some extra step (either patch the SA or update the RBAC in this repo)

+1, i think this is saying something similar

this situation is also similar to how we have modified the cluster-api provider in the cluster-autoscaler. it's a different problem there, but perhaps a similar solution. we compile the autoscaler to know about our resource groups, but the autoscaler can be reconfigured by environment variable to use a different api group. this allows us to share code with upstream capi and also keep our openshift types for when we deploy.

Contributor


Thanks @elmiko

We discussed this issue with the CAPI community, and CAPI already uses an RBAC aggregation label, which allows external components to add their own RBAC rules.

I believe this is an elegant way to solve this problem and it's always good to be aligned with upstream.
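For readers unfamiliar with the mechanism, the aggregation approach could look like the sketch below. All names here are illustrative, not the actual MAO manifests: the MHC controller's ClusterRole aggregates the rules of any ClusterRole carrying the matching label, so a remediation provider grants access simply by shipping a labelled role.

```yaml
# ClusterRole whose rules are filled in automatically by aggregating every
# ClusterRole that carries the matching label (illustrative names only).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: machine-healthcheck-remediation-aggregated
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        machine.openshift.io/aggregate-to-mhc: "true"
rules: []  # populated by the controller-manager from matching roles
---
# An external remediation provider ships its own labelled ClusterRole,
# granting the MHC controller access to its CRD-backed resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: infrastructure-remediation-mhc-access
  labels:
    machine.openshift.io/aggregate-to-mhc: "true"
rules:
  - apiGroups: ["infrastructure.machine.openshift.io"]
    resources: ["infrastructureremediations", "infrastructureremediationtemplates"]
    verbs: ["get", "list", "watch", "create", "delete"]
```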


errList = r.remediate(ctx, needRemediationTargets, errList, mhc)
// deletes External Machine Remediation for healthy machines - indicating remediation was successful
r.cleanEMR(ctx, currentHealthy, mhc)


I think it should belong entirely to the externalRemediation logic block, so the default internal remediation flow is unchanged.

Contributor Author


I definitely see your point, but I'm not sure the alternative is better.

The main thing that bothers me is that it will force us to add currentHealthy to the remediate method's input params - which does not make a lot of sense to me.

Another minor issue - currently externalRemediation and internalRemediation take a single target as input - this change would also force a refactoring of those methods to take the whole slice.

I don't have a strong opinion either way - but I thought it worth bringing these things up before making the change.
Let me know what you think.
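The signature concern can be seen in a toy sketch. All names below are simplified stand-ins for the controller methods, not the real code: keeping cleanEMR separate lets remediate keep its current inputs, while folding them together drags currentHealthy into the remediation entry point.

```go
package main

import "fmt"

type target struct{ name string }

// remediate handles only the unhealthy targets (a stub that returns the
// actions it would take rather than performing them).
func remediate(unhealthy []target) []string {
	var actions []string
	for _, t := range unhealthy {
		actions = append(actions, "remediate "+t.name)
	}
	return actions
}

// cleanEMR runs over the *healthy* machines, deleting their External Machine
// Remediation CRs to signal that remediation succeeded (also stubbed).
func cleanEMR(healthy []target) []string {
	var actions []string
	for _, t := range healthy {
		actions = append(actions, "delete EMR for "+t.name)
	}
	return actions
}

// Folding cleanEMR into the remediation path forces the remediation entry
// point to also take currentHealthy - the signature change being debated.
func remediateAndClean(unhealthy, currentHealthy []target) []string {
	return append(remediate(unhealthy), cleanEMR(currentHealthy)...)
}

func main() {
	for _, a := range remediateAndClean([]target{{"worker-1"}}, []target{{"worker-0"}}) {
		fmt.Println(a)
	}
}
```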


@JoelSpeed JoelSpeed left a comment


I haven't given this a thorough review yet, but I've added a few comments.
While I appreciate the effort to clean up the code, please either drop the cosmetic-only changes or put them into a separate commit. Currently this is very hard to review, as I can't tell what has genuinely changed vs what has just been renamed or moved.



mshitrit commented Mar 4, 2021

/retest

2 similar comments

elmiko commented Mar 30, 2021

/retest


n1r1 commented Apr 1, 2021

/retest


@elmiko elmiko left a comment


this generally makes sense to me, i have a few questions just to make sure i'm not missing anything.

totalTargets,
mhc.Spec.MaxUnhealthy,
totalTargets-currentHealthy,
totalTargets-healthyCount,
Contributor


can we just use unhealthyCount here?

}

func (r *ReconcileMachineHealthCheck) externalRemediation(ctx context.Context, m *mapiv1.MachineHealthCheck, t target) error {
klog.Infof(" %s: start external remediation logic", t.string())
Contributor


this seems like a useful debug statement, i wonder if we shouldn't increase the log level for this? (eg klog.V(3))

my concern here is that it might be seen as adding more noise to the logs at info level, not a blocker just a thought.

Contributor Author


I think you are right - I'll change it 👍

if to.GetKind() == "" {
to.SetKind(strings.TrimSuffix(in.Template.GetKind(), TemplateSuffix))
}
return to, nil
Contributor


do the Labels from GenerateTemplateInput need to be applied in here somewhere?

Contributor Author

@mshitrit mshitrit Apr 2, 2021


That's a very good point.
From what I can tell, labels are not relevant for either GenerateTemplateInput or for the actual EMR (which is created here), so I don't know whether they should be copied or not.
IMO we should remove them altogether to avoid confusion.
@n1r1 what do you think - is there a purpose for the labels here that I'm missing?

Contributor


I don't see any immediate need for that, but I can understand where it comes from and why someone might expect it.
I'm not sure what the correct behavior is here, but for the sake of being consistent with the upstream implementation (which doesn't add labels, AFAIU), I would keep it as is.

If we think this should be changed, we might want to do that upstream first.
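As an aside, the kind-defaulting in the snippet under discussion boils down to the behavior below - a simplified stand-alone version, not the actual helper: the generated object's kind is derived from the template's kind only when the template body did not set one explicitly.

```go
package main

import (
	"fmt"
	"strings"
)

// templateSuffix mirrors the TemplateSuffix constant referenced above.
const templateSuffix = "Template"

// defaultKind derives the generated object's kind from the template's kind
// only when the template body did not set one explicitly.
func defaultKind(existingKind, templateKind string) string {
	if existingKind != "" {
		return existingKind
	}
	return strings.TrimSuffix(templateKind, templateSuffix)
}

func main() {
	fmt.Println(defaultKind("", "InfrastructureRemediationTemplate")) // InfrastructureRemediation
	fmt.Println(defaultKind("CustomRemediation", "AnyTemplate"))      // CustomRemediation
}
```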

@mshitrit mshitrit force-pushed the 4019-Fencing__Backport_MHC_external_remediation_template branch from 3634f3e to 5709b1b Compare April 4, 2021 19:42

@JoelSpeed JoelSpeed left a comment


I've given this a review and am happy to merge, apart from the tests. Can you please take a look at my comment and fix up the subtest logic and then we can get this merged.

Thanks @mshitrit

Comment on lines 450 to 452
recorder := record.NewFakeRecorder(2)
r := newFakeReconcilerWithCustomRecorder(recorder, buildRunTimeObjects(tc)...)
assertBaseReconcile(t, tc, ctx, r)
Contributor


This (and the other loops in this file) still need to be wrapped in a t.Run otherwise there is only one test, not a parent with children. If we do it like this, the first test failure will fail the whole table, and it will be hard to work out exactly which one.

Please restore the t.Run here and in the external remediation test too

Contributor Author


Hi @JoelSpeed,
Thanks, great comment 👍
As far as I can see the other loop was already OK, so I've only changed the one you commented on.

```go
	t.Run(tc.name, func(t *testing.T) {
		recorder := record.NewFakeRecorder(2)
		r := newFakeReconcilerWithCustomRecorder(recorder, buildRunTimeObjects(tc)...)
		assertBaseReconcile(t, tc, ctx, r)
		//assertExternalRemediation
		assertExternalRemediation(t, tc, ctx, r)
	})
}
```


beekhof commented Apr 7, 2021

Looks like the test failures are unrelated...

Cluster operator authentication Degraded is True with IngressStateEndpoints_MissingSubsets::OAuthClientsController_SyncError::OAuthServerDeployment_PreconditionNotFulfilled::OAuthServerRouteEndpointAccessibleController_SyncError::OAuthServerServiceEndpointAccessibleController_SyncError::OAuthServerServiceEndpointsEndpointAccessibleController_SyncError::Route_InvalidCanonicalHost::WellKnownReadyController_SyncError: IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

/retest

@mshitrit mshitrit force-pushed the 4019-Fencing__Backport_MHC_external_remediation_template branch from 5709b1b to 0c8fa6e Compare April 7, 2021 10:41
recorder := record.NewFakeRecorder(2)
r := newFakeReconcilerWithCustomRecorder(recorder, buildRunTimeObjects(tc)...)
assertBaseReconcile(t, tc, ctx, r)
//assertExternalRemediation
Contributor


What's this comment? Do we need this?

Contributor Author


Nope, it's quite redundant.
I've removed it.

@mshitrit mshitrit force-pushed the 4019-Fencing__Backport_MHC_external_remediation_template branch from 0c8fa6e to fa7e6e8 Compare April 7, 2021 12:15

@JoelSpeed JoelSpeed left a comment


/approve

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 7, 2021

@elmiko elmiko left a comment


thanks @mshitrit
/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 7, 2021
@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.


beekhof commented Apr 7, 2021

/retest

2 similar comments

n1r1 commented Apr 8, 2021

/retest


mshitrit commented Apr 8, 2021

/retest


@openshift-ci-robot openshift-ci-robot removed the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 8, 2021
wking added a commit to wking/cluster-version-operator that referenced this pull request Apr 8, 2021
This wasn't covered when ClusterRole reconciliation landed in
697cbf6 (lib: update resource{read,merge,apply} to add new objects,
2018-08-20, openshift#10), but the property existed even back then:

  $ git grep -A20 'type ClusterRole struct' 697cbf6 | grep '[-][[:space:]]AggregationRule'
  697cbf6:vendor/k8s.io/api/rbac/v1/types.go-  AggregationRule *AggregationRule `json:"aggregationRule,omitempty" protobuf:"bytes,3,opt,name=aggregationRule"`
  697cbf6:vendor/k8s.io/api/rbac/v1alpha1/types.go-    AggregationRule *AggregationRule `json:"aggregationRule,omitempty" protobuf:"bytes,3,opt,name=aggregationRule"`
  697cbf6:vendor/k8s.io/api/rbac/v1beta1/types.go-     AggregationRule *AggregationRule `json:"aggregationRule,omitempty" protobuf:"bytes,3,opt,name=aggregationRule"`

and now folks want to use aggregationRule for something [1].

[1]: openshift/machine-api-operator#795

 // EnsureRoleBinding ensures that the existing matches the required.
diff --git a/lib/resourcemerge/rbacv1beta1.go b/lib/resourcemerge/rbacv1beta1.go
index d9e84d5..fa3cf17 100644
--- a/lib/resourcemerge/rbacv1beta1.go
+++ b/lib/resourcemerge/rbacv1beta1.go
@@ -27,6 +27,10 @@ func EnsureClusterRolev1beta1(modified *bool, existing *rbacv1beta1.ClusterRole,
 		*modified = true
 		existing.Rules = required.Rules
 	}
+	if !equality.Semantic.DeepEqual(existing.AggregationRule, required.AggregationRule) {
+		*modified = true
+		existing.AggregationRule = required.AggregationRule
+	}
 }

 // EnsureRoleBindingv1beta1 ensures that the existing matches the required.
wking added a commit to wking/cluster-version-operator that referenced this pull request Apr 8, 2021
@mshitrit mshitrit force-pushed the 4019-Fencing__Backport_MHC_external_remediation_template branch from fa7e6e8 to 7d2d223 Compare April 8, 2021 18:14
@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Apr 8, 2021

@JoelSpeed JoelSpeed left a comment


/approve

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: JoelSpeed

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 8, 2021

@elmiko elmiko left a comment


/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 8, 2021

mshitrit commented Apr 8, 2021

/retest

3 similar comments
@JoelSpeed

/retest


beekhof commented Apr 9, 2021

/retest

@JoelSpeed

/retest

@JoelSpeed

/override ci/prow/e2e-aws-upgrade

Test failures seem to be unrelated to the changes in this PR

@openshift-ci-robot

@JoelSpeed: Overrode contexts on behalf of JoelSpeed: ci/prow/e2e-aws-upgrade


In response to this:

/override ci/prow/e2e-aws-upgrade

Test failures seem to be unrelated to the changes in this PR

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mshitrit mshitrit force-pushed the 4019-Fencing__Backport_MHC_external_remediation_template branch from 7d2d223 to 338eab5 Compare April 9, 2021 19:11
@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Apr 9, 2021

elmiko commented Apr 9, 2021

i left a comment on the other PR, #846 (review)


mshitrit commented Apr 9, 2021

/retest


openshift-ci bot commented Apr 9, 2021

@mshitrit: The following tests failed, say /retest to rerun all failed tests:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| ci/prow/e2e-aws-workers-rhel7 | 6fbecec | link | /test e2e-aws-workers-rhel7 |
| ci/prow/e2e-azure | 338eab5 | link | /test e2e-azure |
| ci/prow/e2e-vsphere | 338eab5 | link | /test e2e-vsphere |
| ci/prow/e2e-vsphere-upgrade | 338eab5 | link | /test e2e-vsphere-upgrade |
| ci/prow/e2e-gcp | 338eab5 | link | /test e2e-gcp |
| ci/prow/e2e-gcp-operator | 338eab5 | link | /test e2e-gcp-operator |
| ci/prow/e2e-libvirt | 338eab5 | link | /test e2e-libvirt |

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.


elmiko commented Apr 9, 2021

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 9, 2021
@openshift-merge-robot openshift-merge-robot merged commit 2f3db03 into openshift:master Apr 9, 2021

sttts commented Apr 12, 2021

@mshitrit mshitrit deleted the 4019-Fencing__Backport_MHC_external_remediation_template branch May 23, 2021 10:19
