
Conversation

@rfredette
Contributor

@rfredette commented Sep 16, 2022

Determine when the next CRL needs to be updated based on the CRL's nextUpdate field. Re-reconcile at that time in order to pick up the updated CRL.
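In outline, the controller finds the soonest nextUpdate among the CRLs it manages and asks controller-runtime to requeue at that time. A minimal sketch of the idea (the loop and variable names are illustrative, not the PR's exact code; the crls map type is taken from the desiredCRLConfigMap signature shown later in the review):

    // Find the soonest NextUpdate among the managed CRLs.
    var next time.Time
    for _, crl := range crls { // crls is a map[string]*pkix.CertificateList
        nu := crl.TBSCertList.NextUpdate
        if next.IsZero() || nu.Before(next) {
            next = nu
        }
    }
    if !next.IsZero() {
        // Re-reconcile when the first CRL is due to be replaced.
        return reconcile.Result{RequeueAfter: time.Until(next)}, nil
    }
    return reconcile.Result{}, nil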

@openshift-ci bot added the do-not-merge/work-in-progress, bugzilla/severity-high, and bugzilla/invalid-bug labels Sep 16, 2022
@openshift-ci
Contributor

openshift-ci bot commented Sep 16, 2022

@rfredette: This pull request references Bugzilla bug 2117524, which is invalid:

  • expected the bug to target the "4.12.0" release, but it targets "---" instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.


In response to this:

WIP: Bug 2117524: Update CRLs when they expire

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci bot requested review from Miciah and frobware September 16, 2022 19:06
@rfredette changed the title WIP: Bug 2117524: Update CRLs when they expire Bug 2117524: Update CRLs when they expire Oct 6, 2022
@openshift-ci bot removed the do-not-merge/work-in-progress label Oct 6, 2022
@rfredette
Contributor Author

/bugzilla refresh

@openshift-ci bot added the bugzilla/valid-bug label and removed the bugzilla/invalid-bug label Oct 6, 2022
@openshift-ci
Contributor

openshift-ci bot commented Oct 6, 2022

@rfredette: This pull request references Bugzilla bug 2117524, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.12.0) matches configured target release for branch (4.12.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

Requesting review from QA contact:
/cc @lihongan


In response to this:

/bugzilla refresh


@openshift-ci bot requested a review from lihongan October 6, 2022 20:56
@rfredette
Contributor Author

/assign @gcs278
/assign @Miciah

Please take a look when you get a chance.

@openshift-ci
Contributor

openshift-ci bot commented Oct 6, 2022

@rfredette: This pull request references Bugzilla bug 2117524, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.12.0) matches configured target release for branch (4.12.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

Requesting review from QA contact:
/cc @lihongan


In response to this:

Bug 2117524: Update CRLs when they expire


Contributor

@gcs278 left a comment

Looks good, just a question about testing.

    if nextCRLUpdate, ok := ctx.Value("nextCRLUpdate").(time.Time); ok && !nextCRLUpdate.IsZero() {
        log.Info("Requeueing when next CRL expires", "requeue time", nextCRLUpdate.String(), "time until requeue", time.Until(nextCRLUpdate))
        // Re-reconcile when any of the CRLs expire
        return reconcile.Result{RequeueAfter: time.Until(nextCRLUpdate)}, nil
Contributor

Is there a risk of this requeueing multiple times for the same next CRL update? I.e. on every reconcile, do we queue up this request? Is that okay?

Contributor

Thoughts?

Contributor

@gcs278 Oct 20, 2022

Thoughts @rfredette?

Contributor Author

Sorry it took me a while to get back to you on this one. I think that's a valid concern. I'm not sure at first glance what's the best way to solve it, but let me look into this.

Contributor Author

@gcs278 I've pushed a change that seems to fix this from my manual testing; please take a look when you get a chance and let me know what you think.

I added a global variable to track when the next CRL update is expected, and now a requeue is only triggered when the next update computed in desiredCRLConfigMap is sooner than any already pending requeue.
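A minimal sketch of that guard, using the currentCRLUpdate package variable that appears later in this review (the surrounding reconcile plumbing is assumed):

    // Only schedule a requeue if none is pending, the pending one has already
    // fired, or the newly computed update time is sooner than the pending one.
    if currentCRLUpdate.IsZero() || currentCRLUpdate.Before(time.Now()) || nextCRLUpdate.Before(currentCRLUpdate) {
        currentCRLUpdate = nextCRLUpdate
        return reconcile.Result{RequeueAfter: time.Until(nextCRLUpdate)}, nil
    }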

Contributor

Is there a risk of this requeueing multiple times for the same next CRL update? I.e. on every reconcile, do we queue up this request? Is that okay?

Are you worried that the queue may never be empty? I think it is all right. Is there some other potential problem with the logic as it was when you asked about it?

Contributor

Are you worried that the queue may never be empty?

I don't think that's what I meant. I just noticed that we queue up a request on every reconcile for the same next CRL update. That's not a big problem, just a bit extraneous: we might have 10 or 20 requests that all trigger at the same time (or immediately after each other) for the same nextCRLUpdate, depending on how many times the CRL was reconciled.

I don't know if there is an easy solution to say "this request is already in the queue, don't queue again". Just pointing it out more than anything.

Contributor Author

I was able to do a bit more testing on this. I got the operator to requeue a reconcile for the same time 10 times, but when the time came, it executed the reconcile twice and then stopped retrying (at least for a while; when the new CRL expired 5 minutes later, it re-reconciled again). I'm not sure why it ran twice; maybe there's a rate limit implemented somewhere, but it doesn't look like this will flood the operator.

Contributor

Yeah, I think we're all right here. I did a little more investigation:

  • controller-runtime's reconcileHandler calls our Reconcile method (technically, it calls the controller's Reconcile method, but that ultimately results in ours being called) and requeues the request using AddAfter if the result has a nil error and a non-zero RequeueAfter:

        result, err := c.Reconcile(ctx, req)
        switch {
        case err != nil:
            c.Queue.AddRateLimited(req)
            ctrlmetrics.ReconcileErrors.WithLabelValues(c.Name).Inc()
            ctrlmetrics.ReconcileTotal.WithLabelValues(c.Name, labelError).Inc()
            log.Error(err, "Reconciler error")
        case result.RequeueAfter > 0:
            // The result.RequeueAfter request will be lost, if it is returned
            // along with a non-nil error. But this is intended as
            // We need to drive to stable reconcile loops before queuing due
            // to result.RequestAfter
            c.Queue.Forget(obj)
            c.Queue.AddAfter(req, result.RequeueAfter)

  • AddAfter adds the item to the queue using Add or sends a waitFor to a channel:

        // AddAfter adds the given item to the work queue after the given delay
        func (q *delayingType) AddAfter(item interface{}, duration time.Duration) {
            // don't add if we're already shutting down
            if q.ShuttingDown() {
                return
            }
            q.metrics.retry()
            // immediately add things with no delay
            if duration <= 0 {
                q.Add(item)
                return
            }
            select {
            case <-q.stopCh:
                // unblock if ShutDown() is called
            case q.waitingForAddCh <- &waitFor{data: item, readyAt: q.clock.Now().Add(duration)}:

  • The queue has a goroutine that reads the waitFor off the channel and adds it to its internal waitingForQueue priority queue using insert or adds it to the queue using Add:

        case waitEntry := <-q.waitingForAddCh:
            if waitEntry.readyAt.After(q.clock.Now()) {
                insert(waitingForQueue, waitingEntryByData, waitEntry)
            } else {
                q.Add(waitEntry.data)

  • insert checks whether there is an existing entry for the reconcile request; if there is one, it updates the existing entry in the queue instead of pushing a duplicate entry onto the queue:

        // insert adds the entry to the priority queue, or updates the readyAt if it already exists in the queue
        func insert(q *waitForPriorityQueue, knownEntries map[t]*waitFor, entry *waitFor) {
            // if the entry already exists, update the time only if it would cause the item to be queued sooner
            existing, exists := knownEntries[entry.data]
            if exists {
                if existing.readyAt.After(entry.readyAt) {
                    existing.readyAt = entry.readyAt
                    heap.Fix(q, existing.index)
                }
                return

  • The queue itself defines Add to prevent adding duplicates:

        // Add marks item as needing processing.
        func (q *Type) Add(item interface{}) {
            q.cond.L.Lock()
            defer q.cond.L.Unlock()
            if q.shuttingDown {
                return
            }
            if q.dirty.has(item) {
                return
            }
            q.metrics.add(item)
            q.dirty.insert(item)
            if q.processing.has(item) {
                return
            }
            q.queue = append(q.queue, item)

Between Ryan's testing and my understanding of the queue and delaying_queue logic, I believe that there should be no issue with duplicate items in the queue or the delaying_queue's internal channel or queue.
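For completeness, a small standalone demo of that dedup behavior (assuming k8s.io/client-go/util/workqueue is on the module path):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/workqueue"
    )

    func main() {
        q := workqueue.NewDelayingQueue()
        defer q.ShutDown()
        // Add the same item ten times with the same delay; insert dedups on
        // the item, so only a single entry lands on the underlying queue.
        for i := 0; i < 10; i++ {
            q.AddAfter("reconcile-request", 100*time.Millisecond)
        }
        time.Sleep(200 * time.Millisecond)
        fmt.Println("queue length:", q.Len()) // expected: queue length: 1
    }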

    crlConfigmap.SetOwnerReferences([]metav1.OwnerReference{ownerRef})

    - return true, &crlConfigmap, nil
    + return true, &crlConfigmap, context.WithValue(ctx, "nextCRLUpdate", nextCRLUpdate), nil
Contributor

I've never seen context used like this, but that could just be because we've never needed to. What's the reason behind using context vs. returning a nextCRLUpdate variable? Is it because it's time/deadline related? Just curious; I don't have an opinion.

Contributor Author

I went back and forth for a while on how to return nextCRLUpdate because it's kind of tangential to the main purpose of desiredCRLConfigMap, and adding that kind of extra return value for callers to manage feels like a bad design choice.

Ideally, nextCRLUpdate would be calculated in its own function to keep the code clearer, but doing that would require parsing all the CRLs a second time, and I understand that in certain scenarios users are running up against the max size of a configmap (1MiB) for the CRL configmap, so parsing again could be pretty wasteful.

I think this is a reasonable compromise, where nextCRLUpdate is available for Reconcile to deal with it, but intermediate functions don't need to worry about it other than to pass the context back up.
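As a sketch, the two ends of that compromise look like this (both lines come from the diff hunks in this review; only the juxtaposition is the editor's):

    // In desiredCRLConfigMap: attach the computed time to the returned context.
    return true, &crlConfigmap, context.WithValue(ctx, "nextCRLUpdate", nextCRLUpdate), nil

    // In Reconcile: intermediate callers only pass the context back up, and
    // Reconcile retrieves the value with a checked type assertion.
    if nextCRLUpdate, ok := ctx.Value("nextCRLUpdate").(time.Time); ok && !nextCRLUpdate.IsZero() {
        return reconcile.Result{RequeueAfter: time.Until(nextCRLUpdate)}, nil
    }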

Contributor

Sure, I can buy that, sounds like you just want to decouple nextCRLUpdate from the output of desiredCRLConfigMap.

    // indicating whether a configmap is desired, the configmap if one is desired,
    // the context (containing the next CRL update time as "nextCRLUpdate"), and an
    // error if one occurred
    func desiredCRLConfigMap(ctx context.Context, ic *operatorv1.IngressController, ownerRef metav1.OwnerReference, clientCAData []byte, crls map[string]*pkix.CertificateList) (bool, *corev1.ConfigMap, context.Context, error) {
Contributor

I noticed we don't have unit testing for this function. Should we have a unit test or E2E test to verify this CRL update logic? Can we have static CAs in our test code that can trigger some of the logical paths?

As far as expiration goes, I saw that in CoreDNS they pass now in as an argument; that way, in unit testing, they can change now (aka teleport to the future) to test the logical code paths for expiration. This is probably not trivial though...

Contributor Author

I saw that in CoreDNS they pass now in as an argument; that way, in unit testing, they can change now (aka teleport to the future) to test the logical code paths for expiration.

I haven't included tests because I haven't had a good idea of how to automate testing this, since the certificates are time-sensitive. My plan is to follow up later with an e2e test that generates certificates and CRLs at test run time, but this is a great idea for unit testing.
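A sketch of that unit-testing idea (the helper below is hypothetical; note that pkix.CertificateList.HasExpired already takes the comparison time as a parameter, which is what makes this style of test work):

    // nextCRLUpdateTime is a hypothetical helper: taking now as a parameter
    // lets a unit test "teleport" into the future to exercise the expiry
    // paths without minting short-lived CRLs.
    func nextCRLUpdateTime(crls map[string]*pkix.CertificateList, now time.Time) (next time.Time, expired []string) {
        for id, crl := range crls {
            if crl.HasExpired(now) {
                expired = append(expired, id)
                continue
            }
            if next.IsZero() || crl.TBSCertList.NextUpdate.Before(next) {
                next = crl.TBSCertList.NextUpdate
            }
        }
        return next, expired
    }

A test could then call nextCRLUpdateTime(crls, time.Now().Add(24*time.Hour)) to verify that CRLs expiring within a day are reported as expired.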

@rfredette
Contributor Author

Cluster bootstrap failed on these tests, but I've since been able to bring up a cluster with cluster-bot, so I don't think it's an issue with this PR.

/retest

@brandisher
Contributor

/retest

@rfredette
Contributor Author

/retest

@lihongan
Contributor

Pre-merge test passed; see https://bugzilla.redhat.com/show_bug.cgi?id=2117524#c16

@lihongan
Contributor

/label qe-approved

@openshift-ci bot added the qe-approved label Oct 18, 2022
@rfredette
Contributor Author

@lihongan I've pushed another change to address one of Grant's comments. It seems fine to me when I test it, but I'd appreciate it if you could verify that everything still looks good.

@lihongan
Contributor

@rfredette Retested the PR with cluster-bot and found no issues; the new CRL can be downloaded and the configmap is updated as well. Operator logs:

2022-10-25T03:55:02.540Z	INFO	operator.crl	crl/crl_configmap.go:69	retrieving certificate revocation list	{"subject key identifier": "1a7047eeaee3036767b896793d5a89e33d9a4a8c"}
2022-10-25T03:55:02.550Z	INFO	operator.crl	crl/crl_configmap.go:69	new certificate revocation list	{"subject key identifier": "1a7047eeaee3036767b896793d5a89e33d9a4a8c", "next update": "2022-10-25 03:58:36 +0000 UTC"}
2022-10-25T03:58:36.007Z	INFO	operator.crl	crl/crl_configmap.go:69	certificate revocation list has expired	{"subject key identifier": "1a7047eeaee3036767b896793d5a89e33d9a4a8c"}
2022-10-25T03:58:36.007Z	INFO	operator.crl	crl/crl_configmap.go:69	retrieving certificate revocation list	{"subject key identifier": "1a7047eeaee3036767b896793d5a89e33d9a4a8c"}
2022-10-25T03:58:36.015Z	INFO	operator.crl	crl/crl_configmap.go:69	new certificate revocation list	{"subject key identifier": "1a7047eeaee3036767b896793d5a89e33d9a4a8c", "next update": "2022-10-25 04:50:12 +0000 UTC"}


    var log = logf.Logger.WithName(controllerName)

    var currentCRLUpdate time.Time
Contributor

Is using a single global variable going to work properly when there are multiple ingresscontrollers with CRLs?

@openshift-ci
Contributor

openshift-ci bot commented Oct 27, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Miciah

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci bot added the approved label Oct 27, 2022
@Miciah
Contributor

Miciah commented Oct 27, 2022

e2e-aws-operator failed because TestHostNetworkPortBinding failed. I found one other failure from 12 days ago using search.ci, so this might just be a flaky test with a very low flake rate.
/test e2e-aws-operator

e2e-gcp-ovn-serial failed because [sig-instrumentation][Late] Alerts shouldn't report any unexpected alerts in firing or pending state [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel] failed:

{  fail [github.com/onsi/ginkgo/[email protected]/internal/suite.go:612]: Oct 27 01:11:15.780: Unexpected alerts fired or pending after the test run:

alert NoRunningOvnMaster fired for 90 seconds with labels: {namespace="openshift-ovn-kubernetes", severity="critical"}

Also, some disruption tests failed:

{  ingress-to-console-new-connections was unreachable during disruption testing for at least 44s of 2h13m31s (maxAllowed=30s):

}
{  ingress-to-oauth-server-new-connections was unreachable during disruption testing for at least 37s of 2h13m31s (maxAllowed=21s):

}

/test e2e-gcp-ovn-serial

@Miciah
Contributor

Miciah commented Oct 27, 2022

/test e2e-azure-operator
/test e2e-gcp-operator

@candita
Contributor

candita commented Oct 27, 2022

/assign

@candita
Contributor

candita commented Oct 27, 2022

{Operator degraded (NodeInstaller_InstallerPodFailed): NodeInstallerDegraded: 1 nodes are failing on revision 14:

/test e2e-aws-operator

@candita
Contributor

candita commented Oct 27, 2022

Oct 27 03:31:38.882 E ns/openshift-ingress-canary pod/ingress-canary-t5pbh node/ci-op-j31fxmni-9dec8-kwjmp-worker-c-mcfzm uid/dc52d4fb-36e4-475a-9af4-a082f90cfc1e container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request

/test e2e-gcp-ovn-serial

@candita
Contributor

candita commented Oct 27, 2022

@rfredette this failure may need investigation.

error running backup collection: errors occurred while gathering data:
[skipping gathering secrets/support due to error: secrets "support" not found, skipping gathering endpoints/host-etcd-2 due to error: endpoints "host-etcd-2" not found, skipping gathering sharedconfigmaps.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedconfigmaps", skipping gathering sharedsecrets.sharedresource.openshift.io due to error: the server doesn't have a resource type "sharedsecrets"]

/test e2e-azure-operator

@rfredette
Contributor Author

Cluster setup failed before tests began

level=error msg=Error: creating EC2 Instance: InvalidParameterValue: Value (ci-op-hs93zt1b-43abb-4lmvz-bootstrap-profile) for parameter iamInstanceProfile.name is invalid. Invalid IAM Instance Profile name
level=error msg= status code: 400, request id: e4e29cf0-0c1d-4fed-9ffc-8e9c626bfdcd

/test e2e-aws-operator

@Miciah
Contributor

Miciah commented Oct 27, 2022

e2e-gcp-ovn-serial failed because disruption tests failed:

{  cache-kube-api-new-connections was unreachable during disruption testing for at least 5s of 2h10m53s (maxAllowed=4s):

}
{  oauth-api-new-connections was unreachable during disruption testing for at least 5s of 2h10m53s (maxAllowed=3s):

}

/test e2e-gcp-ovn-serial

@Miciah
Contributor

Miciah commented Oct 27, 2022

e2e-aws-operator failed because kube-apiserver, kube-controller-manager, and ovnkube-master ran into problems during pod rollouts. Also, must-gather and deprovisioning failed.
Since e2e-azure-operator and e2e-gcp-operator succeeded and e2e-aws-operator failed on known issues that are unrelated to this PR, I'll override e2e-aws-operator.
/override ci/prow/e2e-aws-operator

@openshift-ci
Contributor

openshift-ci bot commented Oct 27, 2022

@Miciah: Overrode contexts on behalf of Miciah: ci/prow/e2e-aws-operator


In response to this:

e2e-aws-operator failed because kube-apiserver, kube-controller-manager, and ovnkube-master ran into problems during pod rollouts. Also, must-gather and deprovisioning failed.
Since e2e-azure-operator and e2e-gcp-operator succeeded and e2e-aws-operator failed on known issues that are unrelated to this PR, I'll override e2e-aws-operator.
/override ci/prow/e2e-aws-operator


@Miciah
Contributor

Miciah commented Oct 27, 2022

/refresh

@Miciah
Contributor

Miciah commented Oct 27, 2022

e2e-gcp-ovn-serial failed because disruption tests failed:

{  ingress-to-oauth-server-new-connections was unreachable during disruption testing for at least 30s of 2h19m31s (maxAllowed=21s):

}
{  ingress-to-console-new-connections was unreachable during disruption testing for at least 39s of 2h19m31s (maxAllowed=30s):

}

/override ci/prow/e2e-gcp-ovn-serial

@openshift-ci
Contributor

openshift-ci bot commented Oct 27, 2022

@Miciah: Overrode contexts on behalf of Miciah: ci/prow/e2e-gcp-ovn-serial


In response to this:

e2e-gcp-ovn-serial failed because disruption tests failed:

{  ingress-to-oauth-server-new-connections was unreachable during disruption testing for at least 30s of 2h19m31s (maxAllowed=21s):

}
{  ingress-to-console-new-connections was unreachable during disruption testing for at least 39s of 2h19m31s (maxAllowed=30s):

}

/override ci/prow/e2e-gcp-ovn-serial


@Miciah
Contributor

Miciah commented Oct 27, 2022

/refresh

@Miciah
Contributor

Miciah commented Oct 27, 2022

/tide refresh

@Miciah
Contributor

Miciah commented Oct 27, 2022

/test all
now that #824 has merged.
/test e2e-azure-operator
/test e2e-gcp-operator

@Miciah
Contributor

Miciah commented Oct 28, 2022

e2e-aws-operator failed because kube-apiserver reported NodeInstallerProgressing and cluster deprovisioning failed. These are known issues with CI and are not caused by the changes in this PR. Moreover, the e2e-azure-operator and e2e-gcp-operator jobs passed. Thus I am overriding the failed e2e-aws-operator job.
/override ci/prow/e2e-aws-operator

@openshift-ci
Contributor

openshift-ci bot commented Oct 28, 2022

@Miciah: Overrode contexts on behalf of Miciah: ci/prow/e2e-aws-operator


In response to this:

e2e-aws-operator failed because kube-apiserver reported NodeInstallerProgressing and deprovisioning failed. These are known issues with CI and are not caused by the changes in this PR. Moreover, the e2e-azure-operator and e2e-gcp-operator jobs passed. Thus I am overriding the failed e2e-aws-operator job.
/override ci/prow/e2e-aws-operator


@Miciah
Contributor

Miciah commented Oct 28, 2022

e2e-gcp-ovn-serial failed because disruption tests failed. Similar failures have been observed in many other PRs, so I believe that the failures are not related to the changes in this PR.
/override ci/prow/e2e-gcp-ovn-serial

@openshift-ci
Contributor

openshift-ci bot commented Oct 28, 2022

@Miciah: Overrode contexts on behalf of Miciah: ci/prow/e2e-gcp-ovn-serial


In response to this:

e2e-gcp-ovn-serial failed because disruption tests failed. Similar failures have been observed in many other PRs, so I believe that the failures are not related to the changes in this PR.
/override ci/prow/e2e-gcp-ovn-serial


@openshift-ci
Contributor

openshift-ci bot commented Oct 28, 2022

@rfredette: all tests passed!

Full PR test history. Your PR dashboard.



@openshift-merge-robot merged commit 022fcdc into openshift:master Oct 28, 2022
@openshift-ci
Contributor

openshift-ci bot commented Oct 28, 2022

@rfredette: All pull requests linked via external trackers have merged:

Bugzilla bug 2117524 has been moved to the MODIFIED state.


In response to this:

Bug 2117524: Update CRLs when they expire


@rfredette
Contributor Author

/cherry-pick release-4.11

@openshift-cherrypick-robot

@rfredette: new pull request created: #853


In response to this:

/cherry-pick release-4.11

