mutated cache filter #9342
Conversation
return kapierrors.NewConflict(api.Resource("serviceaccount"), staleServiceAccount.Name, fmt.Errorf("cannot add reference to %s based on stale data. decision made for %v,%v, but live version is %v,%v", dockercfgSecretName, staleDockercfgMountableSecrets.List(), staleImageDockercfgPullSecrets.List(), mountableDockercfgSecrets.List(), imageDockercfgPullSecrets.List()))
}
// I saw conflicts popping up. Need to retry at least once, I chose five at random.
for i := 0; i < 5; i++ {
Use RetryOnConflict
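For context, a minimal sketch of that suggestion as it would look with today's client-go retry helper (k8s.io/client-go/util/retry); the helper function, client parameter, and surrounding code are illustrative assumptions, not the PR's actual implementation:

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// applyDockercfgPullSecret is a hypothetical helper showing the shape of the
// fix: RetryOnConflict owns the retry loop instead of a hand-rolled
// `for i := 0; i < 5` with a magic retry count.
func applyDockercfgPullSecret(client kubernetes.Interface, ns, saName, secretName string) error {
	// retry.DefaultRetry retries a few times with a short backoff, and only
	// when the wrapped function returns a conflict error.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-fetch the live object on each attempt so the mutation is applied
		// to fresh data instead of the stale copy that caused the conflict.
		sa, err := client.CoreV1().ServiceAccounts(ns).Get(context.TODO(), saName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		sa.ImagePullSecrets = append(sa.ImagePullSecrets, corev1.LocalObjectReference{Name: secretName})
		_, err = client.CoreV1().ServiceAccounts(ns).Update(context.TODO(), sa, metav1.UpdateOptions{})
		return err
	})
}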
force-pushed from 3773562 to 5d0c2c0
[test]
} else {
	// if we had an error it means that we didn't handle it, which means that we want to requeue the work
	utilruntime.HandleError(fmt.Errorf("error syncing service, it will be retried: %v", err))
	e.queue.AddRateLimited(key)
max number of retries?
Unless we flag permanent failures, we'll just pick up on the next resync. I wasn't prepared to say that.
Unless you put a max number of retries, we'll accumulate requeues for every add/update/resync of a service account with a permanent failure.
} else {
	// if we had an error it means that we didn't handle it, which means that we want to requeue the work
	if e.queue.NumRequeues(key) > MaxRetriesBeforeResync {
		e.queue.Forget(key)
HandleError here with a message about it not being retried
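Put together, a hedged sketch of what the reviewer is asking for; the message wording is an assumption, and MaxRetriesBeforeResync is the constant from the diff above:

// if we had an error it means that we didn't handle it, which means that we want to requeue the work
if e.queue.NumRequeues(key) > MaxRetriesBeforeResync {
	// give up on this key and say why; the next full resync will pick it up again
	utilruntime.HandleError(fmt.Errorf("error syncing service account %q, giving up until the next resync: %v", key, err))
	e.queue.Forget(key)
} else {
	utilruntime.HandleError(fmt.Errorf("error syncing service account %q, it will be retried: %v", key, err))
	e.queue.AddRateLimited(key)
}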
nit on retry logging, LGTM otherwise
@Kargakis known flake that your changes will address: https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_integration/2184/consoleFull#-46929419256bf4006e4b05b79524e5923 ? [merge]
@deads2k maybe so, maybe no, please open a separate issue and assign it to me.
re[test] flake: #9443
yum [merge]
failed again on yum. want to pick https://github.com/openshift/ose/commit/bae93d0b280a0c7e29dfcf6e829c144dc01e419c before remerging?
[merge]
#5448 re[merge] re[test]
@deads2k are there other controllers that you had as candidates for this cache type?
Greedy. You think kube is controlled enough to have this as a layer in front of their stores and indexes overall? This is the same trick I used in quota, so clearly that's a candidate.
#9364
Evaluated for origin test up to 78dc03c
continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5239/)
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5239/) (Image: devenv-rhel7_4437)
#5448
Evaluated for origin merge up to 78dc03c
}

// worker_inner returns true if the worker thread should continue
func (e *DockercfgController) worker_inner() bool {
s/worker_inner/innerWorker/ or any other camelCase name, please.
Last two commits. This makes it possible to avoid getting back a stale object if you're the client that updated that same object.
@liggitt @derekwaynecarr @smarterclayton I could actually make this a generic filter sitting on top of an indexer used by a SharedIndexInformer if we wanted to trust people to ONLY call Mutation when they've gotten back a confirmed correct object from Update. There's certainly some utility to such an approach, but it does require some trust.
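For readers landing here later, a simplified sketch of the pattern being discussed, under that same trust assumption; the type and method names (mutationCache, Mutated, GetByKey) are illustrative rather than the PR's actual API, though upstream client-go eventually grew a similar cache.MutationCache:

import (
	"sync"

	"k8s.io/client-go/tools/cache"
)

// mutationCache is a read-through filter over the informer's store that
// remembers objects this process just wrote, so a subsequent read does not
// return a stale version of an object we ourselves updated.
type mutationCache struct {
	mu      sync.Mutex
	backing cache.Store            // the SharedIndexInformer's store (read-only here)
	mutated map[string]interface{} // objects confirmed by a successful Update
}

// Mutated records an object returned from a successful Update call. Callers
// must ONLY pass server-confirmed objects, which is the trust problem
// mentioned above.
func (c *mutationCache) Mutated(key string, obj interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.mutated[key] = obj
}

// GetByKey prefers the freshly written copy over the (possibly stale)
// watch-backed store. A fuller implementation would compare resourceVersions
// and drop entries once the backing store catches up.
func (c *mutationCache) GetByKey(key string) (interface{}, bool, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if obj, ok := c.mutated[key]; ok {
		return obj, true, nil
	}
	return c.backing.GetByKey(key)
}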