InotifyBackend hangs forever when inotify_init1 returns an error #159

amierzwicki opened this issue Oct 16, 2023 · 1 comment

amierzwicki commented Oct 16, 2023

Scenario

System: Linux (Ubuntu 22.04)

When there are too many open inotify instances, the Parcel build hangs without any clue as to why it is stuck.

Expected behavior

Parcel should throw a clear exception, e.g. "There are too many open files, cannot initialize inotify instance".

Stretch goal

Suggest possible fixes to the user (e.g. writing a higher value to /proc/sys/fs/inotify/max_user_instances).
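
As an illustration of what such a suggestion could look like (a sketch with hypothetical names; nothing below exists in the watcher), the error message could include the current limit read from procfs:

```cpp
#include <fstream>
#include <string>

// Hypothetical helper: read the current inotify instance limit so an error
// message can suggest a concrete fix to the user.
std::string inotifyLimitHint() {
  std::ifstream f("/proc/sys/fs/inotify/max_user_instances");
  std::string limit;
  if (f && std::getline(f, limit)) {
    return "max_user_instances is currently " + limit +
           "; raise it, e.g. `echo 256 | sudo tee /proc/sys/fs/inotify/max_user_instances`";
  }
  return "consider raising /proc/sys/fs/inotify/max_user_instances";
}
```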

Current behavior

When there are too many open inotify instances, inotify_init1 returns -1 with errno set to "Too many open files" (EMFILE). This causes an exception in InotifyBackend: https://github.com/parcel-bundler/watcher/blob/master/src/linux/InotifyBackend.cc#L25.
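
For reference, a minimal sketch of the kind of check that turns the bare -1 into a clear error, assuming a hypothetical wrapper (initInotifyOrThrow, its flags, and its message are not the actual watcher code):

```cpp
#include <sys/inotify.h>
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>

// Hypothetical wrapper: initialize inotify and fail with a descriptive message
// instead of letting the error surface as an opaque hang.
int initInotifyOrThrow() {
  int fd = inotify_init1(IN_NONBLOCK | IN_CLOEXEC);
  if (fd == -1) {
    if (errno == EMFILE) {
      // The limit from /proc/sys/fs/inotify/max_user_instances was reached.
      throw std::runtime_error(
          "There are too many open files, cannot initialize inotify instance; "
          "consider raising /proc/sys/fs/inotify/max_user_instances");
    }
    throw std::runtime_error(std::string("inotify_init1 failed: ") + std::strerror(errno));
  }
  return fd;
}
```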

When this happens, Backend::run blocks forever due to insufficient error handling (https://github.com/parcel-bundler/watcher/blob/master/src/Backend.cc#L103):

[screenshot: the state the process ends up in, with Backend::run waiting on mStartedSignal]

The fix will require a bit more than just calling mStartedSignal.notify() (the error needs to be propagated up the stack) - the screenshot above just highlights the state the process ends up in when this error happens.
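
To illustrate one possible shape of such a fix (a sketch only, with assumed names like mStartError and a simplified Signal; not the actual watcher code), the start thread could capture the exception, always notify, and let run() rethrow it:

```cpp
#include <condition_variable>
#include <exception>
#include <mutex>
#include <stdexcept>
#include <thread>

// Sketch of a Signal-like wrapper around std::condition_variable,
// similar in spirit to what Backend::run waits on.
struct Signal {
  std::mutex m;
  std::condition_variable cv;
  bool done = false;
  void notify() {
    std::lock_guard<std::mutex> lk(m);
    done = true;
    cv.notify_all();
  }
  void wait() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [this] { return done; });
  }
};

struct BackendSketch {
  Signal mStartedSignal;
  std::exception_ptr mStartError;  // hypothetical field: captured start failure
  std::thread mThread;

  void start() {
    // Stand-in for the real start(); imagine inotify_init1 failing here.
    throw std::runtime_error("Too many open files, cannot initialize inotify instance");
  }

  void run() {
    mThread = std::thread([this] {
      try {
        start();
      } catch (...) {
        mStartError = std::current_exception();
      }
      // Notify even on failure so the waiter below cannot block forever.
      mStartedSignal.notify();
    });
    mStartedSignal.wait();
    mThread.join();
    if (mStartError) {
      std::rethrow_exception(mStartError);  // propagate the error up the stack
    }
  }
};
```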

Steps to reproduce

  1. Set up a simple Parcel project (package.json + src/index.js is sufficient)
  2. Set /proc/sys/fs/inotify/max_user_instances to a small value (e.g. 1)
  3. Run parcel build
  4. Parcel hangs forever:
> [email protected] build
> parcel build

Workaround

Naturally, the underlying problem of too many open inotify instances still needs to be addressed (even once this bug is resolved) - either by closing old processes on the system or by increasing the limit: echo 256 | sudo tee /proc/sys/fs/inotify/max_user_instances (the default is often 128).

@jsimomaa

(Just as an FYI for anyone who finds this with the same problem I had)

I encountered this bug as well during a gatsby build:

gatsby build --verbose
verbose 0.537943215 set gatsby_log_level: "verbose"
verbose 0.53914066 set gatsby_executing_command: "build"
verbose 0.539580282 loading local command from: /usr/src/app/node_modules/gatsby/dist/commands/build.js
verbose 1.741638654 running command: build
verbose 1.742558817 Running build in "production" environment

And it was stuck there forever. GDB helped me trace the issue:

(gdb) bt
#0  __futex_abstimed_wait_common64 (private=0, cancel=true, abstime=0x0, op=393, expected=0, futex_word=0x7fb2546d5fb8) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (futex_word=futex_word@entry=0x7fb2546d5fb8, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0, cancel=cancel@entry=true) at ./nptl/futex-internal.c:87
#2  0x00007fb255840e0b in __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x7fb2546d5fb8, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=0) at ./nptl/futex-internal.c:139
#3  0x00007fb255843468 in __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x7fb2546d5f68, cond=0x7fb2546d5f90) at ./nptl/pthread_cond_wait.c:503
#4  ___pthread_cond_wait (cond=0x7fb2546d5f90, mutex=0x7fb2546d5f68) at ./nptl/pthread_cond_wait.c:618
#5  0x00007fb255b4941d in std::condition_variable::wait(std::unique_lock<std::mutex>&) () from /lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x00007fb236c3e13f in Backend::run() () from /usr/src/app/node_modules/@parcel/watcher-linux-x64-glibc/watcher.node
#7  0x00007fb236c3f0bd in Backend::getShared(std::basic_string<char, std::char_traits<char>, std::allocator<char> >) () from /usr/src/app/node_modules/@parcel/watcher-linux-x64-glibc/watcher.node
#8  0x00007fb236c3f137 in Backend::getShared(std::basic_string<char, std::char_traits<char>, std::allocator<char> >) () from /usr/src/app/node_modules/@parcel/watcher-linux-x64-glibc/watcher.node
#9  0x00007fb236c26154 in getBackend(Napi::Env, Napi::Value) () from /usr/src/app/node_modules/@parcel/watcher-linux-x64-glibc/watcher.node
#10 0x00007fb236c281dd in writeSnapshot(Napi::CallbackInfo const&) () from /usr/src/app/node_modules/@parcel/watcher-linux-x64-glibc/watcher.node
#11 0x00007fb236c2ccea in Napi::details::CallbackData<Napi::Value (*)(Napi::CallbackInfo const&), Napi::Value>::Wrapper(napi_env__*, napi_callback_info__*) () from /usr/src/app/node_modules/@parcel/watcher-linux-x64-glibc/watcher.node
#12 0x0000000000b35769 in v8impl::(anonymous namespace)::FunctionCallbackWrapper::Invoke(v8::FunctionCallbackInfo<v8::Value> const&) ()
#13 0x0000000000dcd3e0 in v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) ()
#14 0x0000000000dce91f in v8::internal::Builtin_HandleApiCall(int, unsigned long*, v8::internal::Isolate*) ()
#15 0x000000000170dfb9 in Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit ()
#16 0x0000000001691f10 in Builtins_InterpreterEntryTrampoline ()
