Node.js GC in low memory environments #2738
Comments
This doesn't directly answer your questions, but what version of Node are you running? There are significant memory use improvements in the impending 4.0.0 release, so you might look forward to that. 4.0.0 should be out any hour now.
@Trott: My question is more general, since on a PaaS you can specify any version of Node.js. I'm looking at 0.10 and 0.12.
Yes and no. No because node is unaware of the physical memory limit. Yes because it reserves a virtual address range (which can be larger than physical memory) and, once that starts filling up, will GC more frequently. Apropos physical memory (available and total), taking that as an input signal wouldn't do much good because of paging and overcommit. There is potentially little relation between physical memory and what is available to a process.
The docs for Node-RED at http://nodered.org/docs/hardware/raspberrypi.html reflect this, based on experience here: node-red/node-red#191. A useful flag is --max-old-space-size.
@svennam92 - I've noticed several of your posts about memory usage in various low memory environments, since I'm looking to deploy an app to a PaaS such as Bluemix. Have you been successful with limiting the memory consumption using node --max-old-space-size=128 bin/www? When I run that on Mac OS X with node v0.12.7 and look at Activity Monitor, the memory usage climbs up to 148.4 MB. Then again, I'm not exactly sure which memory statistics OS X is showing me right now. The reason I ask is that when I'm using Cloud Foundry, it's almost as if the garbage collection hardly activates before it hits the allocated memory limit. It strongly resembles a behavior where Node just has no idea of the maximum amount of memory it should use. Thanks!
Keep in mind that --max-old-space-size only limits the size of the V8 heap, not the total memory footprint of the process. Surprising to hear that it doesn't seem to run garbage collection at all. Heap allocation is done incrementally... it should run GCs before allocating more heap. Can you use --trace-gc and report what it prints?
Ah! Fair point! I guess I've been too focused on the actual heap size and not distinguishing it from the entire application size. Let me try --trace-gc and get back to you. Thanks!
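For anyone following along, a minimal way to watch this behavior is to combine the two flags mentioned above. This is only a sketch; app.js and its allocation pattern are made up for illustration:

```js
// Hypothetical app.js that churns allocations so GC activity is visible.
// Run with:  node --max-old-space-size=128 --trace-gc app.js
// --trace-gc prints one line per collection with the heap size before/after,
// which makes it easy to see whether the collector is actually running.
var junk = [];
setInterval(function () {
  for (var i = 0; i < 1000; i++) {
    junk.push(new Array(1024).join('x')); // ~1 KB string per iteration
  }
  if (junk.length > 50000) junk = [];     // drop references so old strings become garbage
}, 10);
```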
Is there anything here that still needs to be kept open?
No, let's close.
Does --max-old-space-size have to be a power of 2? (For example, having 256MB of memory, should I stay on a power-of-2 value?)
It doesn't have to be a power of 2. I use prime numbers myself.
I used --max_old_space_size and I again got the memory error. After doing some online research I realized it's not an underscore but a dash, like --max-old-space-size.
@gomesNazareth Dashes and underscores both work.
@bnoordhuis I have no idea why, but the underscore version was not working for me.
Setting it to 75% of the container's memory limit should improve GC behavior in low memory environments. See:
- cloudfoundry/nodejs-buildpack#82
- nodejs/node#2738
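As a sketch of how that rule of thumb could be applied automatically, a small wrapper could read the cgroup limit and pass roughly 75% of it to the app. This assumes a cgroup v1 container (limit exposed at /sys/fs/cgroup/memory/memory.limit_in_bytes) and an app.js entry point; both are illustrative, not part of the buildpack:

```js
// start.js — hypothetical wrapper: read the container's memory limit and
// launch the real app with --max-old-space-size set to ~75% of it.
// Note: --max-old-space-size is expressed in megabytes.
var fs = require('fs');
var spawn = require('child_process').spawn;

var limitBytes = Number(
  fs.readFileSync('/sys/fs/cgroup/memory/memory.limit_in_bytes', 'utf8').trim()
);
var oldSpaceMB = Math.floor((limitBytes / 1024 / 1024) * 0.75);

var child = spawn(process.execPath,
  ['--max-old-space-size=' + oldSpaceMB, 'app.js'],
  { stdio: 'inherit' });

child.on('exit', function (code) {
  process.exit(code);
});
```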
Hi @bnoordhuis, we are currently using Node.js in a Docker container running in K8s. Java versions less than 9 have an issue where they read the memory from the K8s node instead of the Linux cgroup. I'm currently investigating this with Node.js.

I can see that the Docker container only has 1 GB of RAM available to it, but the value of "total_available_size" given back from "v8.getHeapStatistics()" is approx. 1.4 GB, i.e. more than the actual available size. If I create a really long array in Node, the process will just be killed and there will be no helpful error message (i.e. out of memory).

I understand that if we are running into this, then it's a memory leak issue, but I'm still wondering if there is a more elegant way about this. Is there any advantage in setting this limit lower? I.e. more garbage collection, or a graceful way to handle situations where the amount of memory available is less than what we want to allocate. Also, since this is in a container, there is actually only one process running, so we can assume that the memory we define for our container should be available to this Node process. The trouble is telling V8 how much total memory it has instead of just heap memory. I would imagine that if V8 were running low ...

Thanks!
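For reference, a quick way to compare what V8 believes its ceiling is with what the process actually occupies (a sketch; exact field availability varies slightly across Node versions):

```js
// Compare V8's view of the heap with the process-wide footprint.
// heap_size_limit is the ceiling V8 will grow the heap to (controlled by
// --max-old-space-size); rss is what the container's OOM killer actually sees.
var v8 = require('v8');

var stats = v8.getHeapStatistics();
var mb = function (n) { return (n / 1048576).toFixed(1) + ' MB'; };

console.log('heap_size_limit:     ', mb(stats.heap_size_limit));
console.log('total_available_size:', mb(stats.total_available_size));
console.log('used_heap_size:      ', mb(stats.used_heap_size));
console.log('rss:                 ', mb(process.memoryUsage().rss));
```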
Ben is out of office. Let me try to explain this in my own way. The background is runtimes running under a container / VM:
- The thing in common between runtimes (Java or Node) is that they all have a managed heap.
- The thing to remember about runtimes is that they also use native heap (unmanaged by the runtime, managed by the operating system or container).
- The thing to remember about the container is that when it constrains memory, that is not visible to the app (runtime).
Two potential solutions / workarounds:
I guess these are generic to any process, nothing specific to Node.js. Let me know your thoughts.
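A small illustration of the managed-heap vs. native-heap split described above (a sketch; Buffer memory is allocated outside the V8 heap, so it shows up in rss while barely moving heapUsed):

```js
// heapTotal/heapUsed cover the V8-managed heap; rss covers the whole process,
// including native allocations such as Buffers, which --max-old-space-size
// does not constrain.
function mb(n) { return (n / 1048576).toFixed(1) + ' MB'; }

function report(label) {
  var m = process.memoryUsage();
  console.log(label, '- heapUsed:', mb(m.heapUsed), 'rss:', mb(m.rss));
}

report('before');
var native = Buffer.alloc ? Buffer.alloc(100 * 1048576)  // Node >= 4.5
                          : new Buffer(100 * 1048576);    // older Node
native.fill(1);                  // touch the pages so they count towards rss
report('after 100 MB Buffer');   // heapUsed barely moves, rss jumps by ~100 MB
```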
When running apps in PaaS's, like Bluemix or Heroku, the memory can be very limited. For example, you can run Node.js apps with a total of 512MB available in the container. I believe the Node.js runtime sets a much larger heap limit by default.
To get optimal garbage collection, and to avoid the possibility of a Node.js application hitting an OOM, should I set some parameter to indicate to the Node.js runtime that I am running in a limited memory container? I've read about
--max-old-space-size
but I'm not sure if this is required, or if Node.js assumes some reasonable default. Does the Node.js runtime set heap limits based on the amount of memory available in the app's container/cgroup? If not, has anyone done any investigation into reasonable defaults for setting heap sizes?
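One quick way to answer the "what default does it assume" part empirically (a sketch; on newer Node versions the v8 module exposes the limit directly):

```js
// Run once with no flags and once with e.g. --max-old-space-size=128
// to compare the default heap ceiling against an explicit limit.
var v8 = require('v8');
var limit = v8.getHeapStatistics().heap_size_limit;
console.log('heap_size_limit:', (limit / 1048576).toFixed(0), 'MB');
```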