HDDS-10881. Recursive delete on special volume /tmp fails #6780
Conversation
@sadanand48, @ashishkumar50 could you please review the changes? Thanks.
sadanand48 left a comment:
LGTM
@tanvipenumudy could you push an empty commit? CI was not triggered, probably because the PR was converted from a draft.
Thank you @sadanand48 for the review. It appears that the CI has been triggered now; thanks @nandakumar131.
@tanvipenumudy, why are we running into an error when we use
This is because we know that:
Let us assume the scenario wherein:
Marking the PR as a draft for now until we come up with a consensus.
It seems that, using ofs, we can't access buckets inside the tmp volume, so no keys will be visible apart from keys inside the auto-created hashed bucket for that user. Also, ofs doesn't allow creating a bucket manually inside the tmp volume.
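To illustrate the ofs behaviour described above, here is a minimal, hypothetical Java sketch (the "om" service id, the class name, and the client setup are assumptions for illustration, not from this PR): listing /tmp through the Hadoop FileSystem API should only surface entries reachable through the caller's auto-created hashed bucket.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListTmpViaOfs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // "om" is a placeholder for the Ozone Manager host/service id.
    try (FileSystem fs = FileSystem.get(new URI("ofs://om/"), conf)) {
      for (FileStatus status : fs.listStatus(new Path("/tmp"))) {
        // Per the comment above, only entries under the caller's
        // auto-created hashed bucket are expected to show up here.
        System.out.println(status.getPath());
      }
    }
  }
}
```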
@ashishkumar50 Are you suggesting that we don't allow "ozone sh" to delete buckets inside tmp? |
If bucket creation is allowed using sh, then deletion should also be allowed inside tmp. The only concern here is that user1 can't delete another user's (user2's) files inside tmp using the fs command, but the same user1 can delete user2's files through the sh command.
I have not gone through the code, but I remember that sticky-bit behaviour was implemented in this feature, i.e. the hashed buckets should have ACLs stating that only the owner can write to them. We should validate whether that holds true here, and if it does,
If user1 runs the below command... But if the same user1 runs the below command... Both commands look the same but use different systems and do different work.
If this is the case then I think it is fine: it should fail for another user's hashed bucket.
Right, I agree with your point. I think we can look into why user1 is allowed to delete user2's bucket when user2's bucket has the sticky-bit ACL; ideally that shouldn't be allowed.
We should not have such inconsistent behaviour; we can throw an error if the user performing the delete doesn't have permission. Here the user is explicitly performing a delete operation on the whole /tmp volume. If we are allowing the same operation via both the Object Store API and the Filesystem API, we should be consistent.
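To make the inconsistency under discussion concrete, here is a hypothetical side-by-side Java sketch (the original comment's commands were not captured; all names here, including the "user2-hashed-bucket" placeholder, are assumptions): the "same" delete issued once through the Filesystem API and once through the Object Store API.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;

public class TwoDeletePaths {
  public static void main(String[] args) throws Exception {
    // Filesystem API route: goes through ofs, where the sticky-bit/ACL
    // check on the hashed bucket is expected to reject other users.
    try (FileSystem fs =
        FileSystem.get(new URI("ofs://om/"), new Configuration())) {
      fs.delete(new Path("/tmp/key1"), true);
    }

    // Object Store API route: addresses the hashed bucket directly; the
    // discussion above notes this path did not enforce the same check.
    try (OzoneClient client =
        OzoneClientFactory.getRpcClient(new OzoneConfiguration())) {
      client.getObjectStore()
          .getVolume("tmp")
          .getBucket("user2-hashed-bucket") // placeholder bucket name
          .deleteKey("key1");
    }
  }
}
```

If the sticky-bit ACL holds, the first call should fail for another user's hashed bucket, while per the comments above the second path allowed the delete to go through.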
/pending consensus |
Marking this issue as un-mergeable as requested.
Please use /ready comment when it's resolved.
Please note that the PR will be closed after 21 days of inactivity from now. (But can be re-opened anytime later...)
Thank you very much for the patch. I am closing this PR temporarily as there was no activity recently and it is waiting for response from its author. It doesn't mean that this PR is not important or ignored: feel free to reopen the PR at any time. It only means that attention of committers is not required. We prefer to keep the review queue clean. This ensures PRs in need of review are more visible, which results in faster feedback for all PRs. If you need ANY help to finish this PR, please contact the community on the mailing list or the slack channel.
What changes were proposed in this pull request?
Recursive delete on the /tmp special volume, as well as on its FSO/LEGACY buckets, fails when using the ozone shell delete -r -y command with "java.lang.RuntimeException: Failed to clean bucket."
As part of this patch, whenever we encounter the /tmp special volume, we call the volume.deleteBucket() API and in turn utilize the bucket.deleteKeys() API to delete individual keys by iterating over the list of keys, instead of going with the current fs.delete() API over the path.
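A minimal sketch of that approach using the public Ozone client calls named above (the client setup, the surrounding bucket loop, and the class name are illustrative assumptions; this is not the patch itself):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.ObjectStore;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;
import org.apache.hadoop.ozone.client.OzoneKey;
import org.apache.hadoop.ozone.client.OzoneVolume;

public class TmpVolumeCleanupSketch {
  public static void main(String[] args) throws Exception {
    try (OzoneClient client =
        OzoneClientFactory.getRpcClient(new OzoneConfiguration())) {
      ObjectStore store = client.getObjectStore();
      OzoneVolume volume = store.getVolume("tmp"); // the special /tmp volume

      // For each bucket: gather all key names, batch-delete them, then
      // delete the (now empty) bucket itself.
      Iterator<? extends OzoneBucket> buckets = volume.listBuckets(null);
      while (buckets.hasNext()) {
        OzoneBucket bucket = buckets.next();
        List<String> keyNames = new ArrayList<>();
        Iterator<? extends OzoneKey> keys = bucket.listKeys(null);
        while (keys.hasNext()) {
          keyNames.add(keys.next().getName());
        }
        if (!keyNames.isEmpty()) {
          bucket.deleteKeys(keyNames); // batch key deletion
        }
        volume.deleteBucket(bucket.getName());
      }
    }
  }
}
```

Batching deletions through bucket.deleteKeys() and only then calling volume.deleteBucket() sidesteps the per-path fs.delete() that previously failed with "Failed to clean bucket."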
What is the link to the Apache JIRA?
https://issues.apache.org/jira/browse/HDDS-10881
How was this patch tested?
Tested on a local docker-compose cluster.
Volume creation using ozone sh: Before Changes / After Changes (screenshots not reproduced here).
Before Changes / After Changes (screenshots not reproduced here).
Volume creation using ozone fs: Before Changes / After Changes (screenshots not reproduced here).