
aws s3 rm with --recursive option does not delete all the objects from the bucket #8197

Open
anandhu-karattu opened this issue Jul 11, 2024 · 6 comments
Comments

@anandhu-karattu

Environment info

Standalone NooBaa: noobaa-core-5.15.4-20240704.el9.x86_64
Platform: RHEL 9.4

Actual behavior

  1. S3 upload is successful for a data set (a mix of object types: sparse files, files with long names, compressed files, etc.)
  2. When deleting via "aws-alias s3 rm s3://bucket-10909 --recursive", the bucket still shows leftover directories and objects. I checked on the filesystem and the dataset still exists there (a verification sketch follows this list).
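For reference, a minimal boto3 sketch for independently re-listing the bucket after the recursive delete. The endpoint URL and TLS settings are assumptions for a standalone NooBaa deployment and need to be adjusted to the actual environment:

```python
# Sketch only: re-list the bucket with raw boto3 pagination after the
# recursive delete, to confirm the leftover keys independently of "s3 ls".
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://localhost:6443",  # assumed NooBaa S3 endpoint
    verify=False,                            # assumed self-signed certificate
)

paginator = s3.get_paginator("list_objects_v2")
leftover = []
for page in paginator.paginate(Bucket="bucket-10909"):
    leftover.extend(obj["Key"] for obj in page.get("Contents", []))

print(f"{len(leftover)} objects still listed after the recursive delete")
for key in leftover[:20]:
    print(key)
```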

Attaching noobaa.log to this ticket. No errors were found during the upload or the delete.

Timestamps for upload:
Jul 11 15:33:13 --> upload start
Jul 11 15:34:10 --> upload completed

Timestamps for delete:
Jul 11 15:35:02 --> delete start
Jul 11 15:35:13 --> delete completed

Expected behavior

The --recursive option should always delete all uploaded objects and directories.
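To illustrate why leftovers point at listing rather than delete: a recursive delete is, in the usual pattern, a loop of list pages turned into delete batches. This is a minimal sketch of that dependency, not NooBaa's or the AWS CLI's actual code; endpoint and bucket name are assumptions:

```python
# Sketch of the list-then-delete dependency: any key that ListObjectsV2
# never returns is never handed to DeleteObjects, so it survives.
import boto3

s3 = boto3.client("s3", endpoint_url="https://localhost:6443", verify=False)  # assumed endpoint
bucket = "bucket-10909"

kwargs = {"Bucket": bucket}
while True:
    page = s3.list_objects_v2(**kwargs)
    keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if keys:
        # DeleteObjects accepts up to 1000 keys, matching the default page size.
        s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})
    if not page.get("IsTruncated"):
        break  # if IsTruncated is wrongly false, the remaining keys are skipped
    kwargs["ContinuationToken"] = page["NextContinuationToken"]
```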

More information - Screenshots / Logs / Other output

@anandhu-karattu
Author

Attaching noobaa.log now
Uploading noobaa.log…

@anandhu-karattu
Author

To create the test data, I used an internal tool called populatefs, which creates data (files and directories) with all possible combinations.
Then I uploaded the dataset into the S3 bucket. No errors were found during the upload or the delete.

@romayalon
Contributor

romayalon commented Jul 11, 2024

@anandhu-karattu I tried locally with a few objects, a directory, and objects inside that directory; all of the objects were deleted.
A few questions:

  1. Does s3 ls s3://bucket-10909 still show the objects?
  2. Did you see any errors from the s3 rm command?
  3. The link to the logs redirects to the issue itself, can you please check?

@anandhu-karattu
Author

anandhu-karattu commented Jul 11, 2024

noobaa.log
@romayalon, please check this file.
Yes, s3 ls s3://bucket-10909 still shows the objects.
I don't see any errors from the s3 rm command. I will try to collect the console logs as well.

@romayalon
Contributor

romayalon commented Jul 14, 2024

Updating that I tried @anandhu-karattu's dataset, and indeed I see that some files are not getting deleted.
This seems to be a bug in List Objects.
During the initial investigation, I noticed that -

  1. The first list-objects response contains 1000 objects, IsTruncated: true, and a continuation token.
  2. The second list-objects request passes the continuation token from bullet 1, and its response contains 0+ objects with IsTruncated: false. I also noticed that in some cases (there may be more issues) the key marker is inside a directory (a page-by-page diagnostic sketch follows this list).
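A minimal diagnostic sketch (my own, not from the logs) that pages through ListObjectsV2 with the raw ContinuationToken and prints what each page reports, to reproduce the early-termination behavior described above. Endpoint and bucket name are assumptions:

```python
# Sketch: print KeyCount, IsTruncated, and NextContinuationToken per page to
# see where the listing stops short of the real object count.
import boto3

s3 = boto3.client("s3", endpoint_url="https://localhost:6443", verify=False)

kwargs = {"Bucket": "bucket-10909"}
page_no = 0
total = 0
while True:
    page_no += 1
    resp = s3.list_objects_v2(**kwargs)
    count = resp.get("KeyCount", len(resp.get("Contents", [])))
    total += count
    print(f"page {page_no}: KeyCount={count} "
          f"IsTruncated={resp.get('IsTruncated')} "
          f"NextContinuationToken={resp.get('NextContinuationToken')}")
    if not resp.get("IsTruncated"):
        break
    kwargs["ContinuationToken"] = resp["NextContinuationToken"]

print(f"total keys listed: {total}")
```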

@naveenpaul1 naveenpaul1 self-assigned this Aug 19, 2024
@naveenpaul1
Contributor

@anandhu-karattu @romayalon The issue is with list_object. Because of the unique folder structure of the test data, object listing is failing for this dataset, and that affects the object delete call, since the delete uses the list_object output to get the keys. Trying to fix the issue in list_object() (a listing-vs-filesystem comparison sketch follows).
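Since the bucket is backed by a filesystem path, one way to see exactly which keys the listing drops is to diff the listed keys against the files still on the FS export. This is only a sketch: the export path and the "relative path equals object key" mapping are assumptions about the setup:

```python
# Sketch: compare keys returned by ListObjectsV2 with files on the FS export
# to identify the keys that are never listed (and therefore never deleted).
import os
import boto3

EXPORT_PATH = "/mnt/fs/bucket-10909"  # assumed FS path backing the bucket

fs_keys = set()
for root, _dirs, files in os.walk(EXPORT_PATH):
    for name in files:
        rel = os.path.relpath(os.path.join(root, name), EXPORT_PATH)
        fs_keys.add(rel.replace(os.sep, "/"))

s3 = boto3.client("s3", endpoint_url="https://localhost:6443", verify=False)  # assumed endpoint
listed = set()
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="bucket-10909"):
    listed.update(obj["Key"] for obj in page.get("Contents", []))

missing = sorted(fs_keys - listed)
print(f"{len(missing)} keys exist on the FS but are never listed")
for key in missing[:20]:
    print(key)
```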
