high memory usage using stream consumer and potentially memory leaking #179
We run these producers all day and they handle a large number of messages per day, and I am not seeing similar behavior in our own applications. Node and Go are different beasts. Node.js has a garbage collector that runs when it needs to, and sometimes not a moment sooner. I would reduce Node's heap size to a lower amount and see if it ever exceeds it and crashes because it runs out of heap. Otherwise, the garbage collector may just not be running because it doesn't need to.
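A quick way to test that theory (my sketch, not from the thread): cap the heap with V8's standard flag, e.g. `node --max-old-space-size=256 consumer.js` (the script name is a placeholder), and log `process.memoryUsage()` to see whether the growth is in the V8 heap or in native memory:

```js
// Log memory stats every 10s. heapUsed/heapTotal is V8 heap (reclaimable
// by GC); rss also includes native allocations such as librdkafka's
// internal buffers. ('external' requires Node >= 7.2, hence the guard.)
setInterval(function () {
  var m = process.memoryUsage();
  console.log(
    'rss=' + (m.rss / 1048576).toFixed(1) + 'MB',
    'heapUsed=' + (m.heapUsed / 1048576).toFixed(1) + 'MB',
    'external=' + ((m.external || 0) / 1048576).toFixed(1) + 'MB');
}, 10000);
```

If `heapUsed` stays flat while `rss` climbs, the growth is outside V8 and reducing the Node heap size will not surface it as a crash.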
@webmakersteve the high memory usage can be reproduced with the "consumers" in the code above, not the "producers". Maybe I didn't make that clear? Or did I miss something?
Was this fixed by changing Node versions?
Same here. Node v6.11.0. Using the standard consumer API (non-stream).
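For reference, the non-stream ("standard") consumer in node-rdkafka looks roughly like this (a sketch; group id, broker, and topic names are placeholders):

```js
const Kafka = require('node-rdkafka');

// Standard (non-stream) consumer: connect, subscribe, then call
// consume() with no arguments to enter flowing mode and get 'data'
// events, one per message.
const consumer = new Kafka.KafkaConsumer({
  'group.id': 'my-group',
  'metadata.broker.list': 'broker'
}, {});

consumer.connect();
consumer
  .on('ready', function () {
    consumer.subscribe(['topic']);
    consumer.consume();
  })
  .on('data', function (message) {
    console.log(message.value.toString()); // message.value is a Buffer
  });
```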
Actually found out that I'm using the defaults for `queued.max.messages.kbytes`.
UPD: tried the following and still see non-heap memory increasing endlessly:
Giving up; switched to no-kafka.
Sorry to hear it isn't working for you. Were you using the same code linked above? Because that code is outdated. In any event, the code you were using when you ran these tests would be helpful so I can make sure the leak isn't still happening; if you wouldn't mind linking to it, assuming it's easily isolated.
Stephen, thanks for following up.
@terrywh I adapted the code above to the current version of node-rdkafka (2.0.0) and I can't reproduce a memory leak after running for around 8 hours with a stream of about 100 messages per second. I was testing on Node v7.5.0.
Here is the code I was running:

```js
"use strict";

const Kafka = require("./lib"),
  Writable = require("stream").Writable;

// Consuming read stream with auto-commit disabled.
var cs = Kafka.createReadStream({
  'group.id': 'memory-leak-detector',
  'metadata.broker.list': 'broker',
  'enable.auto.commit': false
}, {}, {
  topics: ['topic'],
  waitInterval: 10
});

var cc = cs.consumer;
cc.on("error", function(err) {
  console.log("consumer:", err);
}).on("ready", function() {
  console.log("ready");
});

// The error object must be taken as a callback parameter.
cs.on("error", function(err) {
  console.log("consumer stream:", err);
});

// Pipe every message into a writable that just logs it.
cs.pipe(new Writable({
  objectMode: true,
  write: function(data, encoding, callback) {
    console.log(data);
    callback(null);
  },
}));
```
@michallevin after a long journey of trying various Kafka Node clients, I'm using no-kafka. So far so good. But it's an entirely Node.js implementation, not a native Kafka driver, for better and for worse. Good luck!
@hugebdu Wow, surprising. We were thinking the problem might not be in Kafka at all. We are using Highland.js and uploading files to S3...
@michallevin if you're having memory leak issues in your application, try to isolate them if you can. We've had memory leaks in the software before, but I think we've resolved all of them. We run these consumers all day, every day, but there could still be leaks in edge cases that we aren't hitting. Going to close this issue; open a new one if you're still having trouble!
@webmakersteve @hugebdu We were just able to solve our issue with `queued.max.messages.kbytes`.
@michallevin I remember I tried playing with various config options and still had issues. Also, it was crashing completely on Node 7 (yup, I know it wasn't LTS).
@michallevin what would be a decent value for `queued.max.messages.kbytes`?
@senthil5053 We use 100,000 kbytes, but that's across several topics and consumers. If you have a single consumer, you should be able to use the default.
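For anyone landing here, a minimal sketch of where that setting goes (group and broker names are placeholders; 100000 is the value mentioned above, i.e. roughly 100 MB):

```js
const Kafka = require('node-rdkafka');

const consumer = new Kafka.KafkaConsumer({
  'group.id': 'my-group',
  'metadata.broker.list': 'broker',
  // librdkafka prefetches messages into a native per-consumer queue;
  // this option caps that queue, in kbytes (100000 kbytes ~ 100 MB).
  // Older librdkafka versions defaulted this to a very large value,
  // so the prefetch buffer can look like a leak: RSS keeps growing
  // while the V8 heap stays flat.
  'queued.max.messages.kbytes': 100000
}, {});
```

This also explains the "non-heap memory increasing" observation earlier in the thread: the queue lives in native memory, outside the reach of Node's garbage collector.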
@michallevin Thanks.
Did you fix the problem? I have the same problem on Node v8.15.0. Could you tell me how you solved it? Thanks.
Node v7.10.0, node-rdkafka v0.10.2

Messages are produced to the topic at about 500/s. Using the code below as the consumer, memory goes all the way up to 1 GB:

[memory usage screenshots taken at 12:23 and 12:44]

By setting `'enable.auto.commit': true`, I got the following:

[memory usage screenshots taken at 14:30, 15:07, and 15:54]

By comparison, Go with confluent-kafka-go, which is also based on librdkafka, averaged about 80 MB of memory.
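A sketch of the setup being compared in the two runs above, using the stream API shown earlier in the thread (in published node-rdkafka the helper is exposed as `KafkaConsumer.createReadStream`; broker, group, and topic names are placeholders):

```js
var Kafka = require('node-rdkafka');

// Same stream consumer as in the code above; only 'enable.auto.commit'
// differs between the two runs whose screenshots are referenced above.
var cs = Kafka.KafkaConsumer.createReadStream({
  'group.id': 'memory-leak-detector',
  'metadata.broker.list': 'broker',
  'enable.auto.commit': false // first run; set to true for the second run
}, {}, {
  topics: ['topic'],
  waitInterval: 10
});
```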