Files unexpectedly released leading to Bad file descriptor
#749
Comments
Thanks for the details on this, @jchorl. We'll look into this soon. Quickly linking to another comment where we saw a similar symptom in another issue: #706 (comment)
Thanks @dannycjones, just a +1 in that we are also observing this. I was unable to reproduce it succinctly, so kudos to @jchorl. The files in question were also bash scripts and our workload is also very heavy. I circumvented the issue in our case by copying the files from S3 (via Mountpoint) to EBS so that I wouldn't have to read them from Mountpoint during execution, but this remains an open issue for us.
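For anyone else hitting this before a fix lands, a minimal sketch of that workaround, with hypothetical paths (the mount location and the local EBS-backed directory are assumptions, not the actual setup from this report):

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical paths: the Mountpoint mount and a local, EBS-backed directory.
MOUNT_DIR = Path("/mnt/my-bucket/my-prefix")
LOCAL_DIR = Path("/local/scripts")

LOCAL_DIR.mkdir(parents=True, exist_ok=True)

# Copy each script out of the S3 mount once, then execute the local copies,
# so no reads go through Mountpoint while the workload is running.
for script in MOUNT_DIR.glob("*.sh"):
    local_copy = LOCAL_DIR / script.name
    shutil.copy(script, local_copy)
    local_copy.chmod(0o755)
    subprocess.run(["bash", str(local_copy)], check=True)
```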
Thanks again for the details on this report. The regression was introduced in 0030b0a. We have two simple reproductions (thanks @jamesbornholt and @passaro). One in Python:

```python
import os
import sys

# open the file and duplicate the file descriptor
path = sys.argv[1]
fd = os.open(path, os.O_RDONLY)
fd2 = os.dup(fd)

# read from the first file descriptor
b = os.read(fd, 10)
print(f"read from fd: {b}")

# close the first file descriptor -- triggers a fuse FLUSH
os.close(fd)

# now try reading from the second file descriptor
os.lseek(fd2, 100000, os.SEEK_SET)
b = os.read(fd2, 10)
print(f"read from fd2: {b}")

os.close(fd2)
```

And the other in Rust, as a new integration test to be added: https://github.com/passaro/mountpoint-s3/blob/17c245dbd527815be6659232af97765c7cee07ea/mountpoint-s3/tests/fuse_tests/read_test.rs#L313-L335
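For anyone running the Python reproduction against an affected mount, here is a hedged sketch of what the failing step looks like; the exact errno is an assumption based on the `Bad file descriptor` symptom reported in this issue:

```python
import errno
import os
import sys

path = sys.argv[1]
fd = os.open(path, os.O_RDONLY)
fd2 = os.dup(fd)

os.read(fd, 10)
os.close(fd)  # triggers a FUSE FLUSH; on affected versions the file handle is released here

os.lseek(fd2, 100000, os.SEEK_SET)
try:
    print(f"read from fd2: {os.read(fd2, 10)}")  # expected to succeed on fixed versions
except OSError as e:
    # On affected versions, this read is expected to fail with EBADF because
    # the underlying FUSE file handle was already released by the first close.
    assert e.errno == errno.EBADF
    print(f"read from fd2 failed as described in this issue: {e}")
```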
We've released v1.4.1, which contains a fix for this: https://github.com/awslabs/mountpoint-s3/releases/tag/mountpoint-s3-1.4.1 Please let us know, @jchorl, if you're still seeing the issue; otherwise I hope we can resolve it.
Thank you for the speedy reproduction and fix. I just ran a test that was repeatedly failing and it passed, so I think we're all good here. Thanks again!
Mountpoint for Amazon S3 version
1.4.0
AWS Region
ca-central-1
Describe the running environment
EC2, Amazon Linux 2. Mountpoint is running inside a privileged Ubuntu Docker container.
Mountpoint options
What happened?
Summary: a heavy workload that executes bash scripts stored in S3 (read through Mountpoint) intermittently fails with `Bad file descriptor` errors.

The setup is a bit complicated. I'll put the files at the end because some are a bit large. The scripts live under `my-bucket/my-prefix/`, and there is also a big file; use `head -c 12G </dev/urandom > ~/big-file` to generate it. It's important to note I was running on an `r6i.large`, with a `gp3` vol: 250 GB, 3000 IOPS, 125 MB/s throughput.

Here's how I ran it:
1. In one terminal, get Mountpoint running.
2. In a separate terminal, exec into the container and run the script.
Now for some observations:

- `git bisect` points to 0030b0a.

Here is my hypothesis as to what's going on:

- Something sets a `Closed` status for the file-handle, after which reading no longer works.
- Something issues a FLUSH, closing the file, even though the bash script will be returned to. I'm sure the nested bash call doesn't make things easier! (See the sketch below.)

I'm not sure if this is a minimal reproduction, but I worked hard to get it into a state, so hopefully you can run it too and provide guidance. You may be able to reproduce this behaviour using `O_DIRECT` or something easier.
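To illustrate the hypothesis, here is a minimal, untested sketch of the sequence I believe is happening, approximated with `fork` instead of bash's nested script execution; the EBADF outcome is an assumption based on the errors I'm seeing:

```python
import os
import sys

# Two processes end up sharing the same FUSE file handle for one open file.
path = sys.argv[1]
fd = os.open(path, os.O_RDONLY)

pid = os.fork()
if pid == 0:
    # Child: inherits the descriptor and closes it, which sends a FUSE FLUSH
    # for the shared file handle, then exits.
    os.close(fd)
    os._exit(0)

os.waitpid(pid, 0)

# Parent: if the FLUSH above released the file handle, this read should fail
# with EBADF even though the parent's descriptor is still open.
print(os.read(fd, 10))
os.close(fd)
```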
Scripts used for testing/reproduction:

[0]:

[1]:
Relevant log output
No response