Creating a lot of hardlinks can cause the filesystem to become broken. #3689
Comments
It's probably caused by the trash feature. You may try the test again with trash disabled.
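For reference, trash retention in JuiceFS is controlled by the `--trash-days` setting; a hedged sketch of disabling it (the Redis metadata URL below is a placeholder, not from this thread):

```shell
# Disable the trash feature by setting its retention to 0 days.
# redis://127.0.0.1:6379/1 is a placeholder metadata engine URL.
juicefs config redis://127.0.0.1:6379/1 --trash-days 0
```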
@SandyXSD, thank you for your reply. Also, does my usage align with the motivation behind the hard link design/implementation?
It sounds like a pretty special case, as few users create that many hardlinks in such a short time. From the results we got previously, it's possible to create 1K+ hardlinks within 1 second when using Redis as the metadata engine.
The EIO is fixed by #3706
Awesome work! I'm glad you fixed the issue. Do you know when the next PATCH release is? I hope it's not too far away.
v1.0.5 should be released in July or August, if no critical bug is found. There is a minor release, v1.1-beta, on the way though, which contains this fix as well and will probably be published next week.
What happened:
Creating a lot of hardlinks can cause the filesystem to become broken.
What you expected to happen:
Hardlinks work fine.
How to reproduce it (as minimally and precisely as possible):
Save this into a `reproduce.js` file and run it. Then we can see that the inode has an incorrect count.
Then, if you re-run this script, it will cause an error and the current folder will enter an illegal state: the `ls` command will not respond, and attempting to use `rm` or `unlink` will result in an error.
However, other folders seem to be unaffected by this illegal state.
Is this a bug or is there a limitation due to the implementation or meta engine?
In my scenario, I want to create 10,000 hard links in 1 second. Is it possible to do so in JuiceFS?
Anything else we need to know?
Environment:
- JuiceFS version (use `juicefs --version`) or Hadoop Java SDK version: juicefs version 1.0.0+2022-08-08.cf0c269b
- OS (e.g. `cat /etc/os-release`): Ubuntu 22.04.2 LTS
- Kernel (e.g. `uname -a`): 4.19.91-26.6.al7.x86_64