Node.js GC in low memory environments #2738

Closed
svennam92 opened this issue Sep 8, 2015 · 17 comments
Labels
memory: Issues and PRs related to memory management or memory footprint.
question: Issues that look for answers.

Comments

@svennam92

When running apps on a PaaS, like Bluemix or Heroku, memory can be very limited. For example, you can run Node.js apps with a total of 512MB available in the container. I believe the Node.js runtime sets a much larger heap limit by default.

To get optimal garbage collection, and to avoid the possibility of a Node.js application hitting an OOM error, should I set some parameter to tell the Node.js runtime that I am running in a memory-limited container? I've read about --max-old-space-size, but I'm not sure if this is required or if Node.js assumes some reasonable default.

Does the Node.js runtime set heap limits based on the amount of memory available in the app's container/cgroup? If not, has anyone done any investigation on reasonable defaults for setting heap sizes?
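
As an aside, on Node versions that expose the v8 module (newer than the 0.10/0.12 era discussed below), you can check the heap limit V8 actually picked with a one-liner such as:

node -e "console.log(require('v8').getHeapStatistics().heap_size_limit / 1048576, 'MB')"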

@mscdex mscdex added the question Issues that look for answers. label Sep 8, 2015
@Trott
Member

Trott commented Sep 8, 2015

This doesn't directly answer your questions, but what version of Node are you running? There are significant memory use improvements in the impending 4.0.0 release, so you might look forward to that. 4.0.0 should be out any hour now...

@bnoordhuis
Member

I've read about --max-old-space-size but I'm not sure if this is required, or if Node.js assumes some reasonable default.

--max-old-space-size is required; node doesn't auto-tune. There is no good way to gauge the amount of safely available memory on a multi-process / multi-user system.
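
For a 512MB container that would be something like the following (server.js is a stand-in for your entry point; the exact number is a judgment call that leaves headroom for buffers, code, and the native heap):

node --max-old-space-size=400 server.js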

@ChALkeR ChALkeR added the memory Issues and PRs related to the memory management or memory footprint. label Sep 8, 2015
@svennam92
Author

@Trott: My question is more general, since on a PaaS you can specify any version of Node.js. I'm looking at 0.10 and 0.12.

@bnoordhuis: So Node.js doesn't auto-tune... got it. Will node know if the environment I'm running in has a very small memory constraint? For example, say I run a Node.js application with 512MB and, due to a memory leak, my heap size is slowly increasing. I expect that Node.js will try to aggressively GC as it gets closer and closer to that memory limit. However, if Node.js has no awareness of the memory constraint, it would hit an OOM error much sooner than I would expect.

@bnoordhuis
Member

I expect that Node.js will try to aggressively GC as it gets closer and closer to that memory limit

Yes and no. No because node is unaware of the physical memory limit. Yes because it reserves a virtual address range (which can be larger than physical memory) and, once that starts filling up, will GC more frequently.

Apropos physical memory (available and total), taking that as an input signal wouldn't do much good because of paging and overcommit. There is potentially little relation between physical memory and what is available to a process.

@vielmetti
Contributor

--max-old-space-size is also relevant when running on smaller systems like the Raspberry Pi, which don't have much physical memory.

The docs for Node-RED at http://nodered.org/docs/hardware/raspberrypi.html reflect this, based on experience in node-red/node-red#191. A useful flag is --trace-gc, which taps in at a low level to show what the garbage collector is doing.
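
For example (app.js being your entry point; the exact output format varies between V8 versions):

node --trace-gc --max-old-space-size=128 app.js

Each --trace-gc line reports a collection (scavenge or mark-sweep) with the heap size before and after and the pause time, which makes it easy to see whether GC is keeping up.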

@random6886

@svennam92 - I've noticed several of your posts about memory usage in various low-memory environments, since I'm looking to deploy an app to a PaaS such as Bluemix.

node --max-old-space-size=128 bin/www

Have you been successful with limiting memory consumption? When I run that on Mac OS X with node v0.12.7 and look at Activity Monitor, the memory usage climbs up to 148.4 MB. Then again, I'm not exactly sure which memory statistic OS X is showing me right now.

The reason I ask is that when I'm using Cloud Foundry, it's almost as if garbage collection hardly activates before the app hits the allocated memory limit. It strongly resembles behavior where node simply has no idea how much memory it is allowed to use.

Thanks!

@svennam92
Author

Keep in mind that --max-old-space-size specifies the heap limit for the V8 JS engine that powers Node.js. This doesn't include all the memory the process might be using, such as buffers (for example, if you load very large images or JSON files). OS X will show you the entire memory usage of the node process in its Activity Monitor. Use process.memoryUsage() programmatically to see heap memory usage.
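
For example, a minimal sketch (rss covers the whole process, while heapUsed/heapTotal cover only the V8 heap):

// Log V8 heap usage vs. total process memory every 5 seconds.
setInterval(function () {
  var mem = process.memoryUsage();
  console.log('rss: ' + (mem.rss / 1048576).toFixed(1) + ' MB, ' +
              'heapUsed: ' + (mem.heapUsed / 1048576).toFixed(1) + ' MB, ' +
              'heapTotal: ' + (mem.heapTotal / 1048576).toFixed(1) + ' MB');
}, 5000);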

It's surprising to hear that it doesn't seem to run garbage collection at all. Heap allocation is done incrementally; V8 should run GCs before allocating more heap. Can you use the --trace-gc flag to identify when GC is being run?

@random6886

Ah, fair point! I guess I've been too focused on the actual heap size and not distinguishing it from the entire application's memory footprint. Let me try --trace-gc and get back to you. Thanks!

@Fishrock123
Contributor

Is there anything here that still needs to be kept open?

@bnoordhuis
Member

No, let's close.

@Mithgol

Mithgol commented Nov 23, 2016

Does the --max_old_space_size value have to be a power of 2?

(For example, with 256MB of memory, should I stay at --max_old_space_size=128 or try --max_old_space_size=200 as well?)

@bnoordhuis
Member

It doesn't have to be a power of 2. I use prime numbers myself.

@gomesNazareth

I used --max_old_space_size and I still got the memory error. After doing some online research I realized it's not underscores but dashes, as in --max-old-space-size.

@bnoordhuis
Member

@gomesNazareth Dashes and underscores both work.

@gomesNazareth

@bnoordhuis I have no idea why, but the underscore form was not working for me.

bobzoller added a commit to goodeggs/ranch-baseimage-nodejs that referenced this issue May 13, 2017
set it to 75% of the container's memory limit
should improve GC behavior in low memory environments
see:
- cloudfoundry/nodejs-buildpack#82
- nodejs/node#2738
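
A sketch of that approach in an entrypoint script, assuming a cgroup v1 container (the path, the 75% ratio, and server.js are illustrative, not anything Node provides):

#!/bin/sh
# Derive --max-old-space-size from the container's cgroup v1 memory limit.
LIMIT_BYTES=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
LIMIT_MB=$((LIMIT_BYTES / 1048576))
exec node --max-old-space-size=$((LIMIT_MB * 75 / 100)) server.js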
@corey-aloia

corey-aloia commented Aug 1, 2018

Hi @bnoordhuis, we are currently using Node.js in a Docker container running in K8s. Java versions before 9 have an issue where they read the memory from the K8s node instead of the Linux cgroup. I'm currently investigating this with Node.js. I can see that the Docker container only has 1 GB of RAM available to it, but the value of "total_available_size" given back from "v8.getHeapStatistics()" is approximately 1.4 GB, i.e. more than the actual available size. If I create a really long array in node, the process is simply killed and there is no helpful error message (i.e. out of memory).
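
For example, to compare what V8 thinks its ceiling is with what the container actually allows (a sketch; the cgroup v1 path is an assumption about the container setup):

var v8 = require('v8');
var fs = require('fs');

var heapLimit = v8.getHeapStatistics().heap_size_limit;
var cgroupLimit = parseInt(
    fs.readFileSync('/sys/fs/cgroup/memory/memory.limit_in_bytes', 'utf8'), 10);

console.log('V8 heap_size_limit:  ' + Math.round(heapLimit / 1048576) + ' MB');
console.log('cgroup memory limit: ' + Math.round(cgroupLimit / 1048576) + ' MB');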

I understand that if we are running into this, then it's a memory leak issue, but I'm still wondering if there is a more elegant way to handle it.

Is there any advantage in setting this limit lower? E.g. more garbage collection, or a graceful way to handle situations where the amount of memory available is less than what we want to allocate.

Also, since this is in a container, there is actually only one process running, so we can assume that the memory we define for our container should be available to this node process. The trouble is telling V8 how much total memory it has instead of just heap memory. I would imagine that if V8 were running low on memory it would behave differently. Any clarification on this would be great!

Thanks!

@gireeshpunathil
Member

Ben is out of office. Let me try to explain this in my own way:

The background is runtimes running under a container / VM:

The thing runtimes (Java or Node) have in common is that they all have a managed heap.

The thing containers (Cloud Foundry, Docker, or K8s) have in common is that they can all constrain memory.

The thing to remember about runtimes is that they also use a native heap (unmanaged by the runtime, managed by the operating system or container).

The thing to remember about containers is that when they constrain memory, that constraint is not visible to the app (runtime).

  1. On Linux, memory attributes are generally obtained from /proc/self/stat. If the cgroup tunables are not reflected in this data, then runtimes are helpless.

  2. Even if this is reflected correctly, operations in the unmanaged (native) heap by native code are not controlled by the runtime (for example, a native binding in C issuing a malloc beyond the container-imposed limit), so unexpected termination is unavoidable.

Two potential solutions / workarounds:

  1. Make the runtime container-aware (provide APIs to establish the container identity and query its limits).
  2. Containers should give the runtime leeway well before it hits the ceiling and crashes (for example, a guard block which, when touched, raises a signal to the process) so that the process can take remedial action or crash gracefully. A userland approximation of this idea is sketched below.
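
As a userland approximation of (2), you could poll the process RSS against the container limit yourself (a sketch only; the cgroup v1 path and the 90% threshold are assumptions):

// watchdog.js: warn and take remedial action before the kernel OOM-kills us.
var fs = require('fs');

var limit = parseInt(
    fs.readFileSync('/sys/fs/cgroup/memory/memory.limit_in_bytes', 'utf8'), 10);

setInterval(function () {
  var rss = process.memoryUsage().rss;
  if (rss > limit * 0.9) {
    console.error('rss ' + Math.round(rss / 1048576) + ' MB is near the ' +
                  Math.round(limit / 1048576) + ' MB container limit');
    // Remedial action goes here: shed load, drop caches, or exit gracefully.
  }
}, 1000);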

I guess these are generic to any process, nothing specific to Node.js.

Let me know your thoughts.
