Ctrl-T command (`__fzf_select__`) never finishes, consumes all RAM, by default on macOS #2705
After some exploration, I traced the issue to the `find` command that `__fzf_select__` runs by default.

If I run that command and redirect the output to a file, the command runs indefinitely while the file grows to many gigabytes, so it seems the likely culprit. Skimming the file, I see it often contains entries with long, repetitive paths under `~/Library/Containers`.

It turns out that this is related to the macOS sandboxing security feature that has brought so much effort and drama to the OS recently. Applications that want access to local files must request permission from the user. If granted, it seems the application still doesn't access those files directly, but via symlinks placed into its container directory under `~/Library/Containers`. Because the default command passes `-L`, `find` follows those symlinks, and the cross-links between containers blow the traversal up.

To resolve this, I considered adding another pattern to the prune list. Instead, I tried simply removing the `-L` flag. Now when I run the command, I get a list of 1.5 million files in about a minute, and no excessive RAM usage. The choice to include `-L` long predates this macOS behaviour. In conclusion, I think removing `-L` from the default command is worth considering.
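To see the mechanism in isolation, here is a tiny reproduction (hypothetical scratch paths, not from the original report): a symlink pointing back up its own tree makes `find -L` attempt to revisit directories, while plain `find` merely lists the link as an entry.

```shell
# Scratch demo of a filesystem loop (paths are made up for illustration).
demo=$(mktemp -d)
mkdir -p "$demo/sub"
ln -s .. "$demo/sub/back"   # back -> $demo, creating a cycle

# Without -L the symlink is printed but never entered: 3 entries.
find "$demo" | wc -l

# With -L, GNU find notices the cycle, warns on stderr, and prunes it.
# The Containers case is worse because the links cross between trees
# rather than forming a single tidy loop.
find -L "$demo" 2>/dev/null | wc -l

rm -rf "$demo"
```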
Thanks for the report, that's unfortunate. `find` on macOS and GNU find can both detect filesystem loops, and the latter specifically prints diagnostic messages like so:

```
brew install findutils
gfind -L ~/Library/Containers > /dev/null
# gfind: File system loop detected; ‘/Users/jg/Library/Containers/com.apple.CloudPhotosConfiguration/...’.
# gfind: File system loop detected; ‘/Users/jg/Library/Containers/com.apple.ScreenSaver.Engine.legacyScreenSaver/...’.
# ...
```

So they will eventually stop at some point.

```
find ~/Library/Containers | wc -l
# 89303

find -L ~/Library/Containers | wc -l
# 3809469

gfind -L ~/Library/Containers | wc -l
# 3809072
```

But still, this is extremely wasteful. On the other hand, removing `-L` means the contents of symlinked directories would no longer be listed, which some users depend on.

I'll leave this issue open. Let's hear what other users think. For the time being, you might want to set up `FZF_CTRL_T_COMMAND` yourself.
Hopefully late feedback is better than none 😅 I've also been bitten by this. Typical uses of fzf from the home directory for me are using Alt-C to quickly jump to a subfolder, or searching all my repositories at once. Removing `-L` would break the cases where those trees are reached through symlinks.

That said, nine years is a lot of precedent to overturn. I like @hraftery's idea of excluding these files based on macOS properties. Specifically, the sandbox containers carry the `com.apple.containermanager.uuid` extended attribute, which macOS `find` can match with `-xattrname`:

```
find -L -x * \( -name '.*' -o -xattrname com.apple.containermanager.uuid \) -prune \
  -o -type f -print 2> /dev/null
```

Happy to write up a PR if there's interest.
I have some numbers. From my home directory currently:

```
~ % time find -L . -mindepth 1 \( -path '*/.*' -o -fstype 'sysfs' -o -fstype 'devfs' -o -fstype 'devtmpfs' -o -fstype 'proc' \) -prune -o -type f -print -o -type d -print -o -type l -print 2> /dev/null | wc -l
10156090
find -L . -mindepth 1 \( -path '*/.*' -o -fstype 'sysfs' -o -fstype 'devfs' -  27.99s user 42.01s system 48% cpu 2:24.44 total
wc -l  1.93s user 0.34s system 1% cpu 2:24.44 total
```

After the change:

```
~ % time find -L . -mindepth 1 \( -path '*/.*' -o -fstype 'sysfs' -o -fstype 'devfs' -o -fstype 'devtmpfs' -o -fstype 'proc' -o -xattrname 'com.apple.containermanager.uuid' \) -prune -o -type f -print -o -type d -print -o -type l -print 2> /dev/null | wc -l
3185375
find -L . -mindepth 1 \( -path '*/.*' -o -fstype 'sysfs' -o -fstype 'devfs' -  10.74s user 68.69s system 88% cpu 1:29.69 total
wc -l  0.69s user 0.16s system 0% cpu 1:29.69 total
```

Is that worth it? Maybe? There are still a lot of files in the macOS system folders. One more note: despite my original suggestion …
Hmm, not as dramatic a result, which is surprising. Still, UX-wise, a 2.5-minute max wait is a lot more than a 1.5-minute wait. My results (on a very different computer to my original post!) for your commands were far more dramatic. I don't know if that makes me special or you special. FWIW, I have 535 items in `~/Library/Containers`.
I agree that we should strive to provide a better default, but I'd like to limit the amount of platform-specific code as much as possible and keep the scripts short and simple, as they also serve as a reference implementation of things you can do with fzf.

I think many users these days who are concerned about performance are using programs like fd or ripgrep that scan in parallel. This is the one I use:

```
export FZF_CTRL_T_COMMAND='fd --type f --type d --hidden --follow --exclude .git --strip-cwd-prefix'
```

Have you tried these commands? What do you think of them?
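For completeness, a ripgrep equivalent might look like the line below. This is an assumed variant, not from the thread; note that `rg --files` lists files only (no directories), so Alt-C-style directory jumps would still need something else.

```shell
# Assumed ripgrep-based override: --hidden includes dotfiles, and the
# negated glob mirrors fd's --exclude .git above.
export FZF_CTRL_T_COMMAND='rg --files --hidden --glob "!.git"'
```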
I haven't used fd before, but I have used ripgrep to great effect. So I'd understand not changing this behavior for macOS, especially since there are good alternatives with good examples in the documentation. One follow-up: what do you think about adding a …

EDIT: Oh, I suppose one other option would be to cap the number of results returned by find, e.g. by piping through `head`.
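The capping idea works because `head` exits after N lines and the resulting SIGPIPE stops the producer, so even an unbounded generator terminates promptly. A generic sketch (the cap of one million is an arbitrary illustrative number, not from the thread):

```shell
# An endless producer, stopped cold after five lines by head:
yes some/file/path | head -n 5

# Applied to the binding, the same trick bounds a runaway traversal:
export FZF_CTRL_T_COMMAND="find -L . 2> /dev/null | head -n 1000000"
```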
Related: #3464 / 208e556#r138774139

I was strictly against the idea of adding options for directory traversal, but I'm reconsidering it. The question is to what extent.
But unfortunately, we would need to add a conditional branch:

```sh
if [[ -n $FZF_CTRL_T_COMMAND ]]; then
  eval "$FZF_CTRL_T_COMMAND" | fzf ...
else
  fzf --walker-all --walker-follow ...
fi | ...
```

EDIT: Oh, we could do it without the branch: …
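One branchless shape (an assumed sketch, not necessarily the edit the author had in mind) relies on fzf's documented behaviour of running `$FZF_DEFAULT_COMMAND` when it is non-empty and falling back to its built-in walker otherwise:

```shell
# No if/else needed: an empty FZF_CTRL_T_COMMAND yields an empty
# FZF_DEFAULT_COMMAND, and fzf then walks the filesystem itself.
FZF_DEFAULT_COMMAND="${FZF_CTRL_T_COMMAND:-}" \
  fzf --walker file,dir,follow,hidden
```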
@junegunn The new PR looks great. With this I don't think we'll need to worry about any macOS-specific hacks, or if we do, they can exist in one location rather than across three different find paths. Thank you!
Now that I've tested it, the new `--walker` options work great.
I gave it a try. I ran the update, fired up a new shell, and hit Ctrl-T. By default everything looks much the same (i.e. all good and working) from the outside. I killed the process after it had found some 25 million files. I then went back to my shell config and removed `follow` from the walker settings, which I gathered was the option to turn off symlinks. Fired up a new shell and voila! Ctrl-T finishes well under a minute with just over 3 million files. So a couple of extra hoops, but a great result I think.
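For anyone following the same breadcrumbs, the workaround amounts to one line of config (assumed form; the exact default walker values may differ by fzf version):

```shell
# Walker without `follow`: symlinks appear in the candidate list but are
# not descended into, avoiding the ~/Library/Containers blow-up.
export FZF_CTRL_T_OPTS="--walker file,dir,hidden"
```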
Oh, I forgot this is my issue, so I'm happy to close it with the documented workaround above. Feel free to add more breadcrumbs/caveats/oversights.
@hraftery @timhillgit Glad to hear the new options are working well for you, thanks. It would also help to add a custom `--walker-skip` list (see `man fzf`
).

Info

Problem / Steps to reproduce

1. `brew install fzf`
2. `/usr/local/opt/fzf/install` to install key bindings and shell integrations (ref).
3. Hit Ctrl-T.

While the command can be used right away, searching and selecting the files found, the number of entries grows into the tens of millions and continues endlessly. If left for 10 minutes or so, fzf consumes 16GB of RAM and attempts to continue. With only 16GB of RAM on this machine, the computer becomes increasingly unresponsive, and I eventually aborted the command. No lingering effects, but the behaviour can be repeated.

Expected Behaviour

Since it's not clear which files are found first, if I'm looking for a particular file I would expect to have to wait until the search is complete to be sure. It turns out the search does not complete before bringing the system down. I expected the search to complete and the full list of files to be searchable.
References
I wasn't able to find many relevant reports, but the search terms are tricky to get right, so I'm littering this issue with key terms to provide a trail for posterity. This is the most relevant issue I could find:
Find process got stuck in the background and led to high CPU usage