
Memory usage remains high even after destroying sockets #3978

Closed
ayeressian opened this issue Nov 23, 2015 · 17 comments
Labels
invalid Issues and PRs that are invalid.

Comments

@ayeressian

I've created a simple Node.js server and client that talk to each other over 15000 TCP sockets.

client code:

'use strict';

const net = require('net');

for (let i = 0; i < 15000; ++i) {
    let socket = new net.Socket();
    socket.connect(6000, '127.0.0.1', () => {
        console.log('Connected');
        socket.write('data to server');
    });

    socket.on('data', data => {
        console.log(data);
    });

    socket.on('close', () => {
        console.log('Connection closed');
    });
}

server code:

'use strict';

const net = require('net');

let sockets = [];

let server = net.createServer(socket => {
    socket.write('blabla from server');
    socket.on('data', data => {
        console.log(data);
    });
    sockets.push(socket);
    if (sockets.length >= 15000) {
        setTimeout(() => {
            console.log('cleanup start');
            for (let socket of sockets) {
                socket.end();
                socket.destroy();
                socket.unref();
            }
            console.log('cleaned up and ready');
        }, 80000);
    }
});

if (global.gc) {
    setInterval(() => {
        global.gc();
    }, 15000);
}

setInterval(() => {
    console.log(process.memoryUsage());
}, 5000);

server.listen(6000, '127.0.0.1');

They send and receive messages. While the sockets are being created, memory usage climbs. After destroying the sockets I expect memory usage to drop again, but it doesn't.

Stack Overflow URL

@ChALkeR ChALkeR added the invalid Issues and PRs that are invalid. label Nov 23, 2015
@ayeressian
Author

@ChALkeR I see you've added invalid label to this issue. May I know why?

@ChALkeR
Member

ChALkeR commented Nov 23, 2015

> But after destroying the sockets I expect the memory usage to get low, which doesn't happen.

Several reasons for that.

  1. Inaccessible objects are cleaned up only by gc runs, which do not happen immediately.
  2. A single gc run doesn't clean up all of those objects.
  3. Most memory allocators (glibc's included) do not guarantee that memory is returned to the OS after a free. The same is true in the C and C++ world: freed memory stays claimed by the process and is reused for later allocations.
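The heap/allocator distinction above can be observed from Node itself. A minimal sketch (not code from this thread): `heapUsed` can drop after a collection while `rss` stays high, because freed heap pages are retained by the allocator for reuse. Run with `node --expose-gc` so that `global.gc` is available.

```javascript
'use strict';

// Log rss (memory the OS sees as claimed) next to heapUsed (live JS objects).
function snapshot(label) {
    const { rss, heapUsed } = process.memoryUsage();
    console.log(label, { rss, heapUsed });
    return { rss, heapUsed };
}

// Allocate a large number of short-lived objects.
let big = [];
for (let i = 0; i < 1e6; i++) big.push({ i });
const before = snapshot('allocated');

big = null;                  // drop the only reference
if (global.gc) global.gc();  // force a collection when --expose-gc is set

// heapUsed typically shrinks here; rss often does not.
const after = snapshot('after gc');
```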

@ChALkeR
Member

ChALkeR commented Nov 23, 2015

@ayeressian I was still writing a comment at that moment =).

@ayeressian
Author

@ChALkeR

  1. I run node with the V8 --expose_gc flag and call global.gc() every 15 seconds in code, and the issue persists.
  2. I performed gc several times before measuring memory usage.
  3. I have memory issues in my real Node.js application: memory usage keeps climbing until the process starts using virtual memory, so the memory is never reclaimed.

@targos
Member

targos commented Nov 23, 2015

You should probably remove the sockets from the sockets array after cleanup.
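A minimal sketch of that suggestion (the `cleanup` helper is hypothetical, not code from the thread): after ending and destroying each socket, emptying the array drops the last references, so the GC can collect the socket objects.

```javascript
'use strict';

// Populated by the server's connection handler in the original testcase.
let sockets = [];

function cleanup() {
    for (const socket of sockets) {
        socket.end();      // finish the connection gracefully
        socket.destroy();  // then tear it down
    }
    sockets.length = 0;    // drop the references held by the array
}
```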

@ayeressian
Author

@targos The reference to the sockets array gets dropped after the setTimeout fires, so the GC should collect it. You can test it yourself; it will not affect memory usage.

@ChALkeR
Member

ChALkeR commented Nov 23, 2015

@ayeressian

> The memory usage will go as high until it starts to use virtual memory. So the memory never gets reclaimed.

Do you have a better testcase for that? For example, if looping the above testcase several times (with gc runs) raised memory usage past the limit (resulting in a crash), that would be an issue.
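One shape such a looping testcase might take (a hypothetical sketch, not the testcase that was actually run): repeat an allocate-and-release cycle and sample heapUsed after each round. A genuine leak would make the samples climb without bound; roughly stable samples suggest the memory is being reused.

```javascript
'use strict';

// One allocate-and-release round.
function iteration() {
    let buf = [];
    for (let i = 0; i < 1e5; i++) buf.push('item ' + i);
    buf = null; // release the allocation
}

// Sample heapUsed after each round.
const samples = [];
for (let round = 0; round < 5; round++) {
    iteration();
    if (global.gc) global.gc(); // only available with --expose-gc
    samples.push(process.memoryUsage().heapUsed);
}
console.log(samples);
```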

@targos
Member

targos commented Nov 23, 2015

I don't see how it can get lost; it's a global variable. Anyway, it's just a suggestion, and I cannot test it myself: the client code of the testcase you provided fails to run on my computer.

@ayeressian
Author

@targos what is your node version?

@targos
Member

targos commented Nov 23, 2015

v5.1.0

> node client.js
(libuv) kqueue(): Too many open files in system
events.js:141
      throw er; // Unhandled 'error' event
      ^

Error: connect ENFILE 127.0.0.1:6000 - Local (undefined:undefined)

@ayeressian
Author

@targos you need to increase the ulimit on your system, or decrease the number of sockets.

@ayeressian
Author

@targos every open socket requires at least one open file descriptor. By default, the number of open files allowed per process is lower than 15000 on most systems, so you need to raise that limit with the ulimit command. I assume you are using Unix or Linux.

@ayeressian
Author

@ChALkeR I will try to improve the test case

@ChALkeR
Member

ChALkeR commented Nov 23, 2015

@ayeressian
I ran your testcase several times (one server, several clients one after another), clearing the sockets array after each client run and running the gc several times during cleanup.

The results (I also altered the logging a bit; it now logs at the start and at the end):

{ rss: 21630976, heapTotal: 9275392, heapUsed: 4075136 }
{ rss: 61427712, heapTotal: 51572736, heapUsed: 28804664 }
cleanup start
cleaned up and ready
{ rss: 61386752, heapTotal: 40221440, heapUsed: 4047592 }
{ rss: 79761408, heapTotal: 53636608, heapUsed: 24469120 }
cleanup start
cleaned up and ready
{ rss: 66392064, heapTotal: 40221440, heapUsed: 4094960 }
{ rss: 80715776, heapTotal: 53636608, heapUsed: 24342520 }
cleanup start
cleaned up and ready
{ rss: 67485696, heapTotal: 40221440, heapUsed: 4094696 }
{ rss: 81551360, heapTotal: 54668544, heapUsed: 24268520 }
cleanup start
cleaned up and ready
{ rss: 67047424, heapTotal: 40221440, heapUsed: 4100120 }

I do not see a leak.

@ayeressian
Author

@ChALkeR you are right. Thanks for your help.

@ChALkeR
Member

ChALkeR commented Nov 23, 2015

@ayeressian As for the memory allocators (number 3 in #3978 (comment)), check out this example (C++):

#include <string>
#include <vector>
#include <cstdio>
#include <iostream>
#include <malloc.h>

using namespace std;

vector<string> *x, *y;

int main() {
    x = new vector<string>();
    y = new vector<string>();
    for (int i = 0; i < 5 * 1024 * 1024; i++) {
       x->push_back(to_string(i) + string(" test"));
    }
    y->push_back(to_string(0) + string(" test"));
    x->clear();
    delete x;
    cout << "ready" << endl;
    getchar();
    y->clear();
    delete y;
    return 0;
}

It does not give the memory consumed by x back to the system (assuming the glibc malloc implementation) when you clear and delete x, until you delete y (which happens after a keypress). But it would reuse that memory for further allocations made by the same program, if there were any.

@ayeressian
Author

@ChALkeR hmm... interesting. I think this behaviour depends on the OS memory-management algorithm.
I think for further Node memory evaluation the heapUsed property of process.memoryUsage() will come in handy.
Thanks again
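A minimal sketch of that last point: heapUsed tracks live JS objects, while rss also counts memory the allocator holds on to for reuse, so heapUsed is the better signal when checking whether objects were actually collected.

```javascript
'use strict';

// Report heap usage in megabytes alongside rss for comparison.
const { rss, heapTotal, heapUsed } = process.memoryUsage();
const mb = n => (n / 1024 / 1024).toFixed(1);
console.log(`heapUsed ${mb(heapUsed)} MB of ${mb(heapTotal)} MB heap, rss ${mb(rss)} MB`);
```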
