
Memory Leak and Segmentation Fault with nextjs-blog Official Example #32526

Closed
macso-hwangoh opened this issue Dec 15, 2021 · 6 comments

Labels: please add a complete reproduction (The issue lacks information for further investigation)

Comments

@macso-hwangoh commented Dec 15, 2021

What version of Next.js are you using?

latest

What version of Node.js are you using?

14.18.1

What browser are you using?

Chrome

What operating system are you using?

Linux (WSL2 on Windows 10)

How are you deploying your application?

For my production code: Azure WebApps
For the same issue in this minimal example: Locally

Describe the Bug

On my production code in Azure, I have recently experienced a memory leak and a consequent segmentation fault. Below is the monitored behaviour; notice that the segmentation fault occurs once around November 30th and again around December 11th:
[Image: Azure memory usage chart showing the segmentation faults around November 30th and December 11th]

Below is the log of the error:

2021-12-11T01:20:33.823018051Z 
2021-12-11T01:20:33.876185160Z <--- Last few GCs --->
2021-12-11T01:20:33.876213461Z 
2021-12-11T01:20:33.876219961Z [50:0x54b7b50] 780278115 ms: Mark-sweep 1565.2 (1734.8) -> 1551.1 (1734.9) MB, 3970.3 / 14.3 ms  (average mu = 0.228, current mu = 0.197) task scavenge might not succeed
2021-12-11T01:20:33.876226461Z [50:0x54b7b50] 780283008 ms: Mark-sweep 1566.7 (1734.9) -> 1551.1 (1734.9) MB, 3991.8 / 1.8 ms  (average mu = 0.208, current mu = 0.184) task scavenge might not succeed
2021-12-11T01:20:33.912446926Z 
2021-12-11T01:20:33.912473927Z 
2021-12-11T01:20:33.912481427Z <--- JS stacktrace --->
2021-12-11T01:20:33.912487327Z 
2021-12-11T01:20:33.944804366Z FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
2021-12-11T01:20:34.922220589Z  1: 0xa25510 node::Abort() [/usr/local/bin/node]
2021-12-11T01:20:34.938116300Z  2: 0x9664d3 node::FatalError(char const*, char const*) [/usr/local/bin/node]
2021-12-11T01:20:34.938144901Z  3: 0xb9a8ee v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
2021-12-11T01:20:34.968732784Z  4: 0xb9ac67 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
2021-12-11T01:20:34.968773186Z  5: 0xd56cd5  [/usr/local/bin/node]
2021-12-11T01:20:34.968780386Z  6: 0xd5785f  [/usr/local/bin/node]
2021-12-11T01:20:34.968785886Z  7: 0xd6569b v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
2021-12-11T01:20:34.968791486Z  8: 0xd6925c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]
2021-12-11T01:20:34.976127822Z  9: 0xd2ea2d v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [/usr/local/bin/node]
2021-12-11T01:20:34.990342879Z 10: 0xd288b4 v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) [/usr/local/bin/node]
2021-12-11T01:20:34.991656321Z 11: 0xd2ac01 v8::internal::FactoryBase<v8::internal::Factory>::NewRawTwoByteString(int, v8::internal::AllocationType) [/usr/local/bin/node]
2021-12-11T01:20:35.007109218Z 12: 0xf8cb45 v8::internal::String::SlowFlatten(v8::internal::Isolate*, v8::internal::Handle<v8::internal::ConsString>, v8::internal::AllocationType) [/usr/local/bin/node]
2021-12-11T01:20:35.007167320Z 13: 0x10528bb v8::internal::RegExpImpl::IrregexpExec(v8::internal::Isolate*, v8::internal::Handle<v8::internal::JSRegExp>, v8::internal::Handle<v8::internal::String>, int, v8::internal::Handle<v8::internal::RegExpMatchInfo>) [/usr/local/bin/node]
2021-12-11T01:20:35.007178320Z 14: 0x10a6071 v8::internal::Runtime_RegExpExec(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node]
2021-12-11T01:20:35.007184120Z 15: 0x1426939  [/usr/local/bin/node]
2021-12-11T01:20:36.798139892Z Aborted (core dumped)

2021-12-11T01:20:44.410588112Z error Command failed with exit code 134.
2021-12-11T01:20:44.436323653Z info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

In particular, notice the line FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory.

To investigate, I followed the instructions in https://alberic.trancart.net/2020/05/how-fixed-first-memory-leak-nextjs-nodejs/ and was able to recreate the issue locally with my production app. To further isolate whether the problem was in my own code, I attempted to create the same issue in minimal Next.js projects, eventually arriving at the default project detailed in https://nextjs.org/learn/basics/create-nextjs-app/setup, where I was again able to produce a segmentation fault. Therefore, I'm wondering whether the behaviour described below should occur at all and, if not, I'm hoping the required fix will also fix my production app.

The issue is as follows: after running the development environment on localhost with npm run dev, inspecting with Chrome DevTools, and taking heap snapshots while loading the server with 25 requests per second, I eventually managed to induce a segmentation fault:

hwangoh@LAPTOP-C6317PJA:~/codes/nextjs-blog$ npm run dev

> @ dev /home/hwangoh/codes/nextjs-blog
> NODE_OPTIONS='--inspect' next dev

Debugger listening on ws://127.0.0.1:9229/79e053ea-224d-4f89-8941-4678c65735ac
For help, see: https://nodejs.org/en/docs/inspector
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
event - compiled client and server successfully in 2.7s (158 modules)
Debugger attached.
wait  - compiling / (client and server)...
(node:23807) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 client/ listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
(Use `node --trace-warnings ...` to show where the warning was created)
(node:23807) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 server/ listeners added to [EventEmitter]. Use emitter.setMaxListeners() to increase limit
event - compiled client and server successfully in 1598 ms (174 modules)
Segmentation fault
npm ERR! code ELIFECYCLE
npm ERR! errno 139
npm ERR! @ dev: `NODE_OPTIONS='--inspect' next dev`
npm ERR! Exit status 139
npm ERR!
npm ERR! Failed at the @ dev script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/hwangoh/.npm/_logs/2021-12-15T05_16_29_952Z-debug.log

Notice in particular the warning:

MaxListenersExceededWarning: Possible EventEmitter memory leak detected.

I should mention though that the heap snapshots do not seem to indicate an increase in memory before the segmentation fault occurs:

[Image: Chrome DevTools heap snapshots showing roughly constant memory usage across snapshots]

Here, the last snapshot stalls at 93% because of the segmentation fault. For larger Next.js projects, this segmentation fault occurs much earlier; sometimes the first snapshot cannot even complete after loading the requests. Also, although the EventEmitter memory leak warning appears immediately upon loading the local server with requests, the segmentation fault does not occur unless repeated heap snapshots are taken.

I should probably also mention that I performed the same experiment with a vanilla React app and was unable to reproduce the segmentation fault.

Expected Behavior

No memory leak warning should appear, nor should a segmentation fault occur.

To Reproduce

These steps follow https://alberic.trancart.net/2020/05/how-fixed-first-memory-leak-nextjs-nodejs/

  1. Download the nextjs-blog project using npx create-next-app nextjs-blog --use-npm --example "https://github.com/vercel/next-learn/tree/master/basics/learn-starter"
  2. To the scripts section of your package.json file, add "dev": "NODE_OPTIONS='--inspect' next dev" (see the sketch after this list)
  3. Start the development environment using npm run dev
  4. Go to "chrome://inspect/#devices" and inspect the locally hosted app. Take a heap snapshot
  5. Create a script called "load.sh" containing:
max="$1"
date
echo "url: $2
rate: $max calls / second"
START=$(date +%s);

get () {
  curl -s -v "$1" 2>&1 | tr '\r\n' '\\n' | awk -v date="$(date +'%r')" '{print $0"\n-----", date}' >> /tmp/perf-test.log
}

while true
do
  echo $(($(date +%s) - START)) | awk '{print int($1/60)":"int($1%60)}'
  sleep 1

  for i in `seq 1 $max`
  do
    get $2 &
  done
done
  6. Run ./load.sh 25 "http://localhost:3000/" to send 25 requests per second. Observe the appearance of the warning:

MaxListenersExceededWarning: Possible EventEmitter memory leak detected.

  7. Continue taking heap snapshots until the segmentation fault occurs.
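
For step 2, the scripts section of package.json ends up looking roughly like this (a minimal sketch; the build and start entries are the create-next-app defaults and may differ in your project):

{
  "scripts": {
    "dev": "NODE_OPTIONS='--inspect' next dev",
    "build": "next build",
    "start": "next start"
  }
}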
macso-hwangoh added the bug (Issue was opened via the bug report template) label on Dec 15, 2021
macso-hwangoh changed the title from "Memory Leak and Segmentation Fault with nextjs-blog Example" to "Memory Leak and Segmentation Fault with nextjs-blog Official Example" on Dec 15, 2021
@timneutkens (Member)

Just had a look into this. Using version 12.0.8 I'm unable to reproduce a memory leak with the steps provided, as the heap usage is consistently around 70 MB at most. I've even tried running it with a really low heap size (--max-old-space-size=100) and that did not reproduce it.

The EventEmitter warning is triggered because there are more than 10 .once listeners when you open up 25 requests at the same time. It's not actually leaking memory, as the listener is attached and then removed as it passes through the request; if it were not resolved/removed, you'd notice it because the request would hang, which is not the case here. I guess we could increase the limit to 100+ or so to remove the incorrect warning.
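
For reference, raising that ceiling is done with Node's events API; a minimal sketch (the emitter below is a stand-in, not the actual emitter Next.js uses internally):

const { EventEmitter } = require('events');

// Stand-in emitter; Node warns once more than 10 listeners are added for a single event.
const emitter = new EventEmitter();

// Raise the limit so MaxListenersExceededWarning is not printed until 100+ listeners.
emitter.setMaxListeners(100);

// Alternatively, the process-wide default can be raised:
// EventEmitter.defaultMaxListeners = 100;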

balazsorban44 added the please add a complete reproduction (The issue lacks information for further investigation) label and removed the bug (Issue was opened via the bug report template) label on Jan 24, 2022
@macso-hwangoh (Author)

Hi Tim, thanks for looking into this! Just to double-check, did you repeatedly create heap snapshots? What I noticed was that although the memory size stays consistent every time I take a snapshot, after taking a few the segmentation fault would occur, despite all previous snapshots showing the same memory. My uneducated guess would be that the memory usage does not increase consistently in a linear fashion; something triggers it, and from there it increases until the segmentation fault. There seems to be a bit of stochasticity in when it occurs, as sometimes I need to take 10+ snapshots and sometimes the segmentation fault occurs after only 3. This also seems to parallel the memory issue I'm facing with my production app hosted on Azure: the memory usage will be constant for a couple of days and then suddenly starts increasing until a segmentation fault occurs, despite no one accessing the app.

@balazsorban44 (Member)

This issue has been automatically closed because it received no activity for a month and had no reproduction to investigate. If you think this was closed by accident, please leave a comment. If you are running into a similar issue, please open a new issue with a reproduction. Thank you.

@macso-hwangoh (Author)

Not sure if this helps anyone, but the memory leak has disappeared since we implemented Application Insights in our code following https://docs.microsoft.com/en-us/azure/azure-monitor/app/nodejs#get-started. My suspicion is that querying the app externally to obtain insights causes the memory leak, but if the insights are generated from within the code, this is circumvented.

[Image: Azure memory usage chart after implementing Application Insights]
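
The in-code setup is essentially the getting-started snippet from the linked docs; a minimal sketch (the connection string below is a placeholder for the value from your own Application Insights resource):

const appInsights = require('applicationinsights');

// Placeholder connection string; replace with the one from your Application Insights resource.
appInsights
  .setup('InstrumentationKey=00000000-0000-0000-0000-000000000000')
  .start();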

@github-actions (Contributor)

This closed issue has been automatically locked because it had no new activity for a month. If you are running into a similar issue, please create a new issue with the steps to reproduce. Thank you.

github-actions bot locked as resolved and limited conversation to collaborators on Mar 28, 2022