Revert "pkg/destroy: data/aws: delete undiscoverable AWS objects" #2461
Conversation
This reverts commit 6ae7598, openshift#1268. That was a workaround to recover from openshift-dev clusters where an in-house pruner is removing instances but not their associated instance profiles. Folks using the installer's destroy code won't need it, and while the risk of accidental name collision is low, I don't think it's worth taking that risk. With this commit, folks using external reapers are responsible for ensuring that they reap instance profiles when they reap instances, and we get deletion logic that is easier to explain to folks mixing multiple clusters in the same account.
(force-pushed ff166f9 to 7a0e106)
/cc @eparis /approve
This looks good to me... I defer to @eparis per @abhinavdahiya's request above.
Since we now include part of the clusterID in the name, the original problem this solved is less painful (the original problem was that it was impossible to install a cluster and extremely difficult to get out of the situation). While I believe this patch is the wrong thing to do, I won't hold it up. I don't see a reason that the risk of leaving this code would be any different given VPC re-use (which was the reason I was told this was being considered). We know for a fact that this code cleans up resources for hundreds of people in the world. I, at least, know of no problem this code causes. But if the team believes what's best for our users is to leak resources, I don't have the fight in me to disagree.
Hmm, yeah, looking at this more closely I missed the part about the clusterID prefix: we are only deleting resources prefixed with the clusterID, which is unique in the shared-VPC case too. So I agree with @eparis that this deletion logic is fine to keep unless we get bugs or requests from users. /approve cancel
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: wking. The full list of commands accepted by this bot can be found here. The pull request process is described here. Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
@wking: PR needs rebase. Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@wking: The following tests failed, say
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/close I'm still not excited about explaining this in docs, but we can always re-open if/when someone else prompts us to simplify.
@wking: Closed this PR. Details: In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.