
Multiple mount on the same mount point #2840

Closed
cat0dog opened this issue Oct 10, 2022 · 3 comments · Fixed by #2979
Labels: priority/low (The priority lower than normal)

cat0dog commented Oct 10, 2022

What happened:
After configuring the JuiceFS file system to mount automatically on boot, every manual run of mount -a starts one more juicefs mount process, and each extra mount then needs its own umount to remove.

By contrast, when an NFS automount is recorded in /etc/fstab, running mount -a any number of times leaves only one NFS mount process, which a single umount removes.

What you expected to happen:
mount -a should not repeatedly mount an already-mounted JuiceFS file system.

How to reproduce it (as minimally and precisely as possible):
Configure JuiceFS to mount automatically on boot (via /etc/fstab), then run mount -a several times.
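As a sketch of why the duplicates accumulate: a mount helper could consult /proc/self/mountinfo and skip mount points that are already occupied. The Go snippet below (a hypothetical helper, not part of JuiceFS) parses mountinfo-format text, whose fifth whitespace-separated field is the mount point:

```go
package main

import (
	"fmt"
	"strings"
)

// alreadyMounted reports whether mountPoint appears as the fifth field
// (the mount point) of any line in mountinfo-format text, as read from
// /proc/self/mountinfo.
func alreadyMounted(mountinfo, mountPoint string) bool {
	for _, line := range strings.Split(mountinfo, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 5 && fields[4] == mountPoint {
			return true
		}
	}
	return false
}

func main() {
	// Two sample mountinfo lines: an NFS mount and a JuiceFS (FUSE) mount.
	sample := "36 25 0:32 / /mnt/nfs rw - nfs4 server:/export rw\n" +
		"37 25 0:33 / /jfs rw - fuse.juicefs myvol rw"
	fmt.Println(alreadyMounted(sample, "/jfs"))     // true: already mounted
	fmt.Println(alreadyMounted(sample, "/mnt/tmp")) // false
}
```

With such a check in place, a second mount attempt on /jfs would be skipped instead of spawning another juicefs mount process.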

Anything else we need to know?

Environment:

  • JuiceFS version (use juicefs --version) or Hadoop Java SDK version:
juicefs version 1.0.0+2022-08-08.cf0c269
  • OS (e.g. cat /etc/os-release):
PRETTY_NAME="Ubuntu 22.04 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
  • Kernel (e.g. uname -a):
Linux node75 5.15.0-33-generic #34-Ubuntu SMP Wed May 18 13:34:26 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
@cat0dog cat0dog added the kind/bug Something isn't working label Oct 10, 2022

cat0dog commented Oct 10, 2022

Also, after a single umount is executed, the client monitoring data for this node is lost.

@SandyXSD SandyXSD self-assigned this Oct 10, 2022
SandyXSD (Contributor) commented

mount -a compares each fstab entry against the mount table (/proc/self/mountinfo or /run/mount/utab) and skips file systems that are already mounted. JuiceFS uses the META-URL as the first field (SRC) in fstab, but reports the volume name as the FUSE FsName, which is what appears as SRC in the mount table. Since the two are usually different, every run of mount -a calls juicefs mount again.
There is no plan to change this behavior of JuiceFS (the META-URL is not a good choice to display as FsName because it's usually pretty long), so for now we may just leave this issue as it is.
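To illustrate the mismatch described above, the sketch below (hypothetical helpers and sample values, not JuiceFS code) extracts the SRC field from an fstab line and from a mountinfo line and compares them the way mount -a effectively does:

```go
package main

import (
	"fmt"
	"strings"
)

// fstabSource returns the first field of an fstab entry (the SRC).
func fstabSource(line string) string {
	return strings.Fields(line)[0]
}

// mountinfoSource returns the source field of a mountinfo line: the
// second field after the "-" separator (which is followed by the fstype,
// then the source).
func mountinfoSource(line string) string {
	fields := strings.Fields(line)
	for i, f := range fields {
		if f == "-" && i+2 < len(fields) {
			return fields[i+2]
		}
	}
	return ""
}

func main() {
	// NFS: SRC matches on both sides, so mount -a skips the entry.
	nfsFstab := "server:/export /mnt/nfs nfs defaults 0 0"
	nfsInfo := "36 25 0:32 / /mnt/nfs rw - nfs4 server:/export rw"
	fmt.Println(fstabSource(nfsFstab) == mountinfoSource(nfsInfo)) // true

	// JuiceFS: fstab records the META-URL while mountinfo records the
	// volume name (the FUSE FsName), so the sources never match.
	jfsFstab := "redis://10.0.0.1:6379/1 /jfs juicefs _netdev 0 0"
	jfsInfo := "37 25 0:33 / /jfs rw - fuse.juicefs myvol rw"
	fmt.Println(fstabSource(jfsFstab) == mountinfoSource(jfsInfo)) // false
}
```

Because the JuiceFS comparison always yields false, mount -a treats the entry as not yet mounted on every run.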

@SandyXSD SandyXSD added the priority/low The priority lower than normal label Oct 14, 2022
SandyXSD (Contributor) commented

If this issue is a real concern for you, you can try recording the META-URL and setting it as the FsName here: https://github.com/juicedata/juicefs/blob/release-1.0/pkg/fuse/fuse.go#L434
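A minimal sketch of that idea, assuming a hypothetical helper rather than the actual patch: derive the FsName from the META-URL so that the fstab SRC and the mountinfo SRC can line up, while redacting any embedded password so credentials do not leak into mountinfo:

```go
package main

import (
	"fmt"
	"net/url"
)

// fsNameFromMetaURL is a hypothetical helper that turns a META-URL into
// a FUSE FsName. It keeps the user name but drops the password, since
// everything in FsName becomes visible in /proc/self/mountinfo.
func fsNameFromMetaURL(metaURL string) string {
	u, err := url.Parse(metaURL)
	if err != nil {
		return metaURL // fall back to the raw string
	}
	if u.User != nil {
		u.User = url.User(u.User.Username())
	}
	return u.String()
}

func main() {
	fmt.Println(fsNameFromMetaURL("redis://user:secret@10.0.0.1:6379/1"))
	// redis://user@10.0.0.1:6379/1
}
```

Note that if the fstab entry itself contains the password, redacting it here would reintroduce the SRC mismatch, so the fstab entry would need to omit the password as well (e.g. by supplying it via an environment variable).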
