Stream memory leak in 20.11.1 #51936
Comments
Did you run on
Thanks for reporting. Can you include a script to reproduce?
@bienzaaron In my case, the memory leak can also be observed in
Does it crash with an OOM? We really need a script to reproduce. It could be that the optimization in question unveiled a bug somewhere in Winston.
Yes, it does. I let the code run its course this time and it ended up crashing as follows:
For reference:
And afterwards I ran it with the revert patch applied; here are the two memory/CPU usage graphs again:
The following snippets of code seem to be enough to reproduce the problem: link. On the one hand, perhaps this code just produces so many logs that winston can't manage to process them quickly enough, so they accrue in memory and the process eventually crashes with OOM.
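To make that hypothesis concrete, here is a minimal sketch (not the linked reproduction, just an illustration): if producers keep calling write() on a slow Writable without honouring the false return value or waiting for 'drain', pending chunks pile up in the stream's internal buffer and heap usage grows.

```js
// Illustration only: a slow Writable that drains one chunk every few
// milliseconds, while the producer writes much faster and ignores the
// backpressure signal returned by write().
const { Writable } = require('node:stream');

const slowSink = new Writable({
  highWaterMark: 16 * 1024,
  write(chunk, encoding, callback) {
    // Simulate a slow transport (e.g. network or file I/O).
    setTimeout(callback, 5);
  },
});

setInterval(() => {
  // write() returns false once the internal buffer exceeds highWaterMark;
  // ignoring that return value lets slowSink.writableLength (and the heap)
  // grow without bound.
  for (let i = 0; i < 1000; i++) {
    slowSink.write(`log line ${i} ${'x'.repeat(200)}\n`);
  }
  console.log('buffered bytes:', slowSink.writableLength);
}, 10);
```

Steady growth of writableLength in a sketch like this is backpressure mismanagement rather than a true leak, which is why separating winston's behaviour from a core streams regression matters here.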
@mcollina Could you please provide any updates? Thank you.
I spent some time playing with your demo and it's leaking memory in Node v20.11.1 but not v20.10.0. However, I haven't identified the cause of the leak yet, or why it's showing up in the convoluted example. More importantly, several EventEmitter leaks are showing up in both. Can you simplify your example using only Node.js core?
Unfortunately I don't have a more concrete example without using winston.
Tested on my staging environment:
Also tested with the simplified test case which only uses Node.js core.
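For reference, a core-only stress test along these lines might look like the following hypothetical sketch (the stream counts, chunk sizes, and intervals are invented, not taken from the actual test case):

```js
// Hypothetical core-only test: repeatedly write into many PassThrough -> Writable
// pipelines (roughly what a logger with many transports does) and watch memory.
const { PassThrough, Writable } = require('node:stream');

function makePipeline() {
  const source = new PassThrough();
  const sink = new Writable({
    write(chunk, encoding, callback) {
      setImmediate(callback); // pretend to flush the chunk somewhere
    },
  });
  source.pipe(sink);
  return source;
}

const sources = Array.from({ length: 1000 }, makePipeline);

setInterval(() => {
  for (const source of sources) {
    source.write(`entry ${Date.now()}\n`);
  }
  const { rss, heapUsed } = process.memoryUsage();
  console.log(`rss=${(rss / 1e6).toFixed(1)}MB heapUsed=${(heapUsed / 1e6).toFixed(1)}MB`);
}, 100);
```

Comparing the printed rss/heapUsed figures across runs on v20.10.0 and v20.11.1 is one way to check whether the growth is specific to the newer release.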
Then... closing.
Version
v20.11.1
Platform
Linux xxx SMP PREEMPT Tue Jan 9 09:56:30 CST 2024 aarch64 aarch64 aarch64 GNU/Linux (Debian-based distribution)
Subsystem
No response
What steps will reproduce the bug?
Using winston with multiple custom transports and a lot of logger instances. Also requires a fairly large volume of logs being produced.
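A rough sketch of that kind of setup, for illustration only (the transport below is a stand-in, not one of our real custom transports; the logger count and log volume are invented):

```js
// Illustrative only: many winston logger instances, each with custom
// transports, producing a high volume of log lines.
const winston = require('winston');
const Transport = require('winston-transport');

class NoopTransport extends Transport {
  log(info, callback) {
    // A real custom transport would ship `info` somewhere (HTTP, file, ...).
    setImmediate(() => {
      this.emit('logged', info);
      callback();
    });
  }
}

const loggers = Array.from({ length: 200 }, () =>
  winston.createLogger({ transports: [new NoopTransport(), new NoopTransport()] })
);

setInterval(() => {
  for (const logger of loggers) {
    logger.info('high volume log line', { payload: 'x'.repeat(256) });
  }
}, 10);
```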
How often does it reproduce? Is there a required condition?
Seems to happen every time we run our code. Doesn't happen on v20.10.0; this bug requires node > v20.10.0. We tested, and changing the UV_USE_IO_URING value doesn't fix the bug (it seems to trade CPU usage against the RAM leak, but performance has regressed either way).
What is the expected behavior? Why is that the expected behavior?
The memory leak does not happen when logging a huge volume of logs. It's expected because it used to be that way before upgrading from v20.10.0 to v20.11.1 (and disabling IO_URING doesn't seem to be at fault).
What do you see instead?
Here's a RAM consumption graph from a staging environment where we reproduced the memory leak. The first three runs with memory leaks in this graph are when running on stock Node.js v20.11.1. The last two runs were run with a custom Node.js built from v20.11.1 source by reverting commit dd52068.
.Additional information
No response