ci-operator/jobs/infra-periodics: Opt installer out of periodic-retester #1832
Conversation
Currently, for things like [1,2] that try to unstick us from some external change, we need to /hold the other approved PRs to get them out of the merge queue while the fix goes in. With the bot removed from our repository, those PRs would remove themselves as they failed naturally, and we'd just /retest them after the fix lands. We can turn the bot back on once we're back down to one external workaround a week or so, vs. our current several per day ;). Docs for the repo: search syntax are in [3].

[1]: openshift/installer#415
[2]: openshift/installer#425
[3]: https://help.github.com/articles/searching-issues-and-pull-requests/#search-within-a-users-or-organizations-repositories

/assign @smarterclayton
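For context: the periodic retester picks up PRs to /retest via a GitHub search query, so opting openshift/installer out amounts to adding a negated repo: qualifier (the syntax documented in [3]) to that query. A rough sketch of the kind of query involved; the exact query in ci-operator/jobs/infra-periodics may differ:

```
is:pr is:open label:lgtm label:approved status:failure -repo:openshift/installer
```

Dropping the exclusion later, once external breakage is back to roughly one workaround a week, would re-enable the bot for the repository.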
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: wking
If they are not already assigned, you can assign the PR to them by writing /assign @wking in a comment.

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
NACK
/hold

If you want something to jump the queue, just push the green button. With batching, I can't imagine your queue would ever be longer than a batch or two. This really feels like a band-aid for a symptom of a larger problem that's better solved at the root.
This is not an option. Even when everything but Tide is green, I see:
Elevating my permissions to avoid that would be even more of a kludge than this.
Say you have PRs A, B, and C in flight, with C fixing some issue. I don't want Tide waiting on the likely-to-fail A and B retests while it collects a batch; I just want C landed as quickly as possible. I'm not clear on how Tide collects batches, though; are there docs on that somewhere?

Another issue is that the robot doesn't look into the failure before retesting. This means it takes longer to surface breakage like "some operator pushed a broken image" while the bot blindly retests a PR that's just tweaking installer docs (like this one).
Your team lead and group lead should both have write access to the repo. That is the intended way to merge changes quickly.
Said another way,
@smarterclayton we were supposed to have ratcheting dependencies. Please let's not recreate what |