Possible memory leak #3413
Thanks for the report! Is this reproducible outside of Google App Engine? I.e., can you share the app and instructions on how to trigger the memory leak so we can take some memory dumps and track down where the leak is occurring? Another trick you can try is isolating the memory leak to Express vs. Node.js core by removing the Express wrapper from your Hello World and seeing if the leak is still present or not. I can help you do this if you can provide some code :)
Google support reported that they ran their hello world program outside of their platform and were able to reproduce the issue (the one associated with the last graph in the initial post). It was being tested with a simple …
I will run it locally and run that …

```js
/**
* Copyright 2017, Google, Inc.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
// [START app]
const http = require('http');
const server = http.createServer(app);
function app (req, res) {
switch (`${req.method} ${req.url}`) {
case 'GET /':
res.statusCode = 200;
res.setHeader('Content-Type', 'text/html');
res.setHeader('Content-Length', '13');
res.end('Hello, world!');
break;
default:
res.statusCode = 404;
res.setHeader('Content-Type', 'text/html');
res.setHeader('Content-Length', '13');
res.end('404 Not Found');
break;
}
}
// Start the server
const PORT = process.env.PORT || 8080;
server.listen(PORT, () => {
console.log(`App listening on port ${PORT}`);
console.log('Press Ctrl+C to quit.');
});
// [END app]
```
Hi @mdcotter 👋, I'm from the other end of the Google Cloud support team you have been talking to. Also, hi @dougwilson 👋. I have some more information on this. I can reproduce the leak with and without Express, but only under heavy load (…)
Hi @ofrobots, welcome :)! I just finished doing the …
The leak starts after ~200 seconds of load, and I wasn't able to reproduce it with low concurrency (e.g. 20). BTW, I am observing the leak with 8.4.0, so I suspect this is a long-standing slow leak that has somehow gone unnoticed for a while.
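For anyone trying to reproduce this locally, here is a minimal sketch of a load generator written in Node itself; it is an assumption about the setup (the reporters used an external benchmarking tool whose exact invocation is not preserved in this thread), and the port and concurrency values are placeholders. It simply keeps a fixed number of concurrent `GET /` requests in flight against the Hello World server above.

```js
'use strict';
// Minimal load generator: keeps CONCURRENCY requests to GET / in flight.
// PORT and CONCURRENCY are assumptions for illustration only.
const http = require('http');

const PORT = process.env.PORT || 8080;
const CONCURRENCY = 200;
const agent = new http.Agent({ keepAlive: true, maxSockets: CONCURRENCY });

let completed = 0;

function fire () {
  http.get({ host: 'localhost', port: PORT, path: '/', agent: agent }, (res) => {
    res.resume();              // drain the body so the socket can be reused
    res.on('end', () => {
      completed++;
      fire();                  // immediately issue the next request
    });
  }).on('error', (err) => {
    console.error(err.message);
    setTimeout(fire, 1000);    // back off briefly on errors
  });
}

for (let i = 0; i < CONCURRENCY; i++) {
  fire();
}

setInterval(() => {
  console.log(`completed: ${completed}`);
}, 5000);
```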
@ofrobots gotcha. I'm going to try some higher concurrency, and for much longer, to see how I can help out in tracking it down. Let me know what you find as well, and I'll post back here as I find things 👍
@dougwilson I deployed the basic-node hello world to GAE. It will only receive GET /_ah/health calls from the App Engine monitoring. I will monitor its memory consumption.
I was running the plain-node and Express hello world programs locally using … vs … I was running …
Thanks @mdcotter. I think I'm seeing it here as well, though it takes me about an hour and a half to begin seeing the tiny upward trend, so the turnaround time is likely to be very long on my side. It pretty much puts my only machine out of commission for that hour and a half, and I need to get back to work here, so I'll run again later tonight and then take some more stabs, perhaps over the weekend.
In case this helps: from the dumps I see some crypto blocks that seem to be persisting in there. You can try adding …
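The suggestion above is cut off in the thread, so what follows is only a hedged guess at the kind of tooling it may have pointed to. On Node 6/8 at the time, a common way to capture comparable heap snapshots from a running process was the third-party `heapdump` module; the module, interval, and file path below are assumptions, not necessarily what was being suggested:

```js
'use strict';
// Assumption: using the npm `heapdump` module to write V8 heap snapshots
// every 10 minutes; successive snapshots can then be loaded into Chrome
// DevTools and diffed in the comparison view to see which objects accumulate.
const heapdump = require('heapdump');

setInterval(() => {
  const filename = `/tmp/heap-${Date.now()}.heapsnapshot`;
  heapdump.writeSnapshot(filename, (err) => {
    if (err) console.error('snapshot failed:', err);
    else console.log('wrote', filename);
  });
}, 10 * 60 * 1000);
```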
The leak is so tiny that there isn't much to see in the heap dumps. I have a suspicion that maybe even this has to do with nodejs/node#9668, because even if you do nothing with Express, there are still a few small objects that are allocated on the … I'm going to try out the following app later tonight if I don't have another idea:

```js
/**
* Copyright 2017, Google, Inc.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
'use strict';
// [START app]
const http = require('http');
const server = http.createServer(app);
function app (req, res) {
req.res = res;
res.req = req;
res.locals = Object.create(null);
switch (`${req.method} ${req.url}`) {
case 'GET /':
res.statusCode = 200;
res.setHeader('Content-Type', 'text/html');
res.setHeader('Content-Length', '13');
res.end('Hello, world!');
break;
default:
res.statusCode = 404;
res.setHeader('Content-Type', 'text/html');
res.setHeader('Content-Length', '13');
res.end('404 Not Found');
break;
}
}
// Start the server
const PORT = process.env.PORT || 8080;
server.listen(PORT, () => {
console.log(`App listening on port ${PORT}`);
console.log('Press Ctrl+C to quit.');
});
// [END app]
```
P.S. I also just noticed that the Hello, World! program has a mistake in it: …
@mdcotter can you share what version of Node you are seeing this with? Also, I've seen that memory usage eventually rises when you start handling requests, but it will stabilize at some point. We run a bunch of production apps with Express that handle a fair amount of traffic and we haven't noticed any major increases in memory per request.
I have been playing around a bit tonight, and there seems to be some kind of off-heap increase, but I'm not sure if it can really cause an OOM condition. I've been working to isolate what exactly is accumulating off-heap, though. When I used plain HTTP there is pretty much completely flat off-heap usage, but introducing Express causes it to slowly grow until I stop watching it (it grows very, very slowly). The biggest issue with off-heap growth is that there should not be any way to cause it directly from within JavaScript, only from C++ add-ons and similar, unless the leak is within a Node.js built-in itself (right??).
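For reference, a minimal sketch of the kind of measurement being described: logging heap vs. off-heap memory so a slow divergence between RSS and heap usage becomes visible over time. The sampling interval, output format, and the "RSS minus heapTotal" approximation are assumptions for illustration.

```js
'use strict';
// Periodically log heap vs. (roughly approximated) off-heap memory so a slow
// divergence between rss and heapUsed becomes visible over a long run.
function toMB (bytes) {
  return (bytes / 1024 / 1024).toFixed(1);
}

setInterval(() => {
  const m = process.memoryUsage();
  console.log(
    `rss=${toMB(m.rss)}MB heapTotal=${toMB(m.heapTotal)}MB ` +
    `heapUsed=${toMB(m.heapUsed)}MB offHeap~=${toMB(m.rss - m.heapTotal)}MB`
  );
}, 10000).unref(); // unref so this timer alone doesn't keep the process alive
```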
At least with Node … I am currently running things with Node 6.11.1 to see if the same leak is present there as well.
Awesome @ofrobots, that looks like what I'm seeing. I believe the following change to Express can work around this V8 bug:

```diff
diff --git a/lib/router/index.js b/lib/router/index.js
index 51db4c28..be1e0dca 100644
--- a/lib/router/index.js
+++ b/lib/router/index.js
@@ -157,7 +157,9 @@ proto.handle = function handle(req, res, out) {
var done = restore(out, req, 'baseUrl', 'next', 'params');
// setup next layer
- req.next = next;
+ req.next = function _next () {
+ return next.apply(this, arguments)
+ }
// for options requests, respond with a default if nothing else responds
if (req.method === 'OPTIONS') {
```
The only issue with that change is Express will then use almost 2x the heap space (but... no leak?).
I've opened a V8 bug for this: https://bugs.chromium.org/p/v8/issues/detail?id=6800 to get V8 GC folks to weigh in.
An update: there was indeed a leak in V8, and we have a fix in V8 already. Bad news: this leak doesn't exist in the version of V8 (5.1) that ships with Node 6.x, so it seems that the leak in 6.x is different.

@dougwilson regarding #3413 (comment), why do you think that chaining the …

Regardless, @mdcotter it would still be worth running an experiment with the proposed change from @dougwilson and reporting back if you notice any differences.

Next step for me: I will run the sampling heap profiler against Node 6.x and see if I can witness any leaks.
Hi @ofrobots, sorry I didn't reply before; I did not get an email notification from GitHub for your comment and only just saw it in the GitHub notifications now. Regarding the patch, I don't think it directly fixes the leak, but it at least seems not to trigger the GC leak you were seeing, likely by changing the way the GC interacts with the underlying data structures.
So is there anything further to do here? Anything that should be changed in Express, or is this just a V8 bug?
Hey @dougwilson, @mdcotter's teammate here. We ended up migrating our code base to a Compute Engine instance and so far, no memory leaks detected. Maybe a GAE problem?
Sorry for the late response. Given that the leak is extremely slow, it has been hard to isolate. I have been able to confirm that this is not an issue in Node.js 6.x either (the Node 8.x issue has been fixed upstream in V8 and released as part of Node 8 LTS). The investigation suggests that this might be an issue in the background logging daemon on GAE. The GAE team is looking into this. This bug can be closed.
Hi all, although, as discussed, the problem is not in Express, I wanted to share our findings on the issue.

We recently deployed a really small Express app and saw the memory piling up unrelated to the traffic it received. With the default instance setup the service crashed in 8 hours, while upping the memory to 4 GB kept the service up for several days before crashing. In addition, our Java services running on Flex had the same behaviour.

We deployed the same Express app to a similar Compute Engine instance (2 CPU, 2 GB memory) and saw that the memory did not increase constantly. Under a heavy load test it rose, of course, but returned to a minimal level (~10%) after the load stopped.

Due to our DNS settings, we had to expose that service through App Engine to be able to run it in the staging environment, so we deployed a really small NGINX instance that responds to the health checks with 200 OK but proxies all other requests to the Compute Engine instance. The funny thing was that the memory started piling up on this NGINX service on App Engine Flex, similarly to when the Node.js app was running on GAE Flex. See the graph of NGINX on GAE Flex below.

Based on the logs of a Node.js app running in GAE, there seems to be an NGINX container running next to the Node.js one, which routes all the requests into the respective containers (I assume). So our findings seem to back up @ofrobots and the GAE team: the problem is beyond Node.js and somewhere in the GAE infra. We reduced the period of the health checks and it seemed to slow down the memory piling up, so maybe it is indeed in the logging daemon, for NGINX perhaps. Hope the GAE team will find the problem and fix it soon! =)

Edit: Related GoogleCloudPlatform/fluent-plugin-google-cloud#187
Is this still being looked into? The newest version of Express 4.x on the newest version of Node 8 grows forever in memory, and it's not slow. It also doesn't release. I spam-hit the / route and memory just goes up forever. My server in production has to be restarted every 4 hours because of some memory leak, and I can't figure out what my code is doing wrong, so I tried to diagnose it. And an empty Express app does that. Node v8.11.2
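For context, here is a hedged sketch of the kind of "empty Express app" being described, with the same periodic memory logging idea as earlier; the port, interval, and structure are assumptions, not the commenter's actual code.

```js
'use strict';
// A minimal Express app plus periodic memory logging, useful for watching
// whether rss/heapUsed keep growing while the / route is hit repeatedly.
const express = require('express');

const app = express();

app.get('/', (req, res) => {
  res.send('Hello, world!');
});

app.listen(process.env.PORT || 8080);

setInterval(() => {
  const m = process.memoryUsage();
  console.log(
    `rss=${(m.rss / 1048576).toFixed(1)}MB ` +
    `heapUsed=${(m.heapUsed / 1048576).toFixed(1)}MB`
  );
}, 10000);
```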
@digitalpacman: see #3413 (comment). We looked into this extensively and it was tracked to a bug in V8, and there isn't a possible workaround in Express.js. We're out of our league here, and there isn't anything further we understand how to do. The best way to move this forward is if someone with the expertise can help. @ofrobots was kind enough to dig in above, but I'm not sure where that really stands. If you believe there is a fix that can be made to the Express.js source code, by all means please open a pull request with it and we will ❤️ you.
I'm experiencing some strange memory leak on a … @dougwilson, so you are saying that nothing can be done on Express's part? Maybe this issue should remain open?
The OP issue that was investigated has been summarized above and there is nothing further to do. If you believe you are having the exact same issue, reopening it won't cause me to suddenly know how to fix it; it was tracked back to an issue in V8, which is unfixable by code in Express. Of course, as mentioned above, if you know a fix, please feel free to make a pull request. If you are experiencing a different memory leak than the OP and the investigation here, you need to open a new issue with all the code and details needed for us to reproduce it so we can investigate your issue.
So there could be a possible memory leak in Express v4.x.
We have been in constant communication with the Google App Engine team to troubleshoot our servers crashing due to what is thought to be high memory usage,
and it appears to be a memory leak in Express.
Our API is served by Express 4.14.0; the screenshot above is an instance receiving only 6 health checks from Google every 5 seconds, and within a few days it crashes because of high memory usage.
We thought it was a problem within our own code base, but we decided to deploy the Hello World from Google App Engine, which uses only one dependency, Express 4.15.4, and got the same result (graph below).
Both instances received health checks from Google every 5 seconds to report their uptime and send CPU usage, memory usage, and disk usage to Stackdriver.
The Google Cloud support team was able to reproduce the issue outside of GCP with the Hello World app and saw the heap size staying consistent but the overall process memory growing. Since the only dependency we were using was Express, we decided to open an issue. Here is the memory graph that they had.
We are currently using Node v6.9.11 in production.