Memory leak in Bun.spawn with subprocess polling #18265

Open
iitzkube opened this issue Mar 18, 2025 · 7 comments · Fixed by #18316
Labels: bug, bun:spawn, memory leak, performance

iitzkube commented Mar 18, 2025

What version of Bun is running?

1.2.6-canary.74+74768449b

What platform is your computer?

Linux 6.1.62 x86_64 / Darwin 24.3.0 arm64

What steps can reproduce the bug?

I have been noticing an unusual increase in memory usage in my app, and after days of debugging I was able to narrow it down to Bun.spawn being the culprit.

The app polls a short-lived subprocess twice every second and reads its output. The subprocess is very small and quick (the output is ~600 bytes and execution time is ~30 ms). Over an extended period of time (~12 hours), the RSS grows significantly, exceeding 1 GB. The heap stats stay relatively stable, although very slowly trending upwards.

This is unexpected for a lightweight polling operation and suggests that subprocess handles, output buffers, or event listeners are not being properly cleaned up, preventing garbage collection. I started monitoring process.memoryUsage() and here are the readings I got on two different instances:

[Images: process.memoryUsage() readings from two instances]

Repro steps:

Note

I added a detailed repro script here and shared the memory readings from it here.

To further validate this, I kept the snippet below running overnight:

async function spawn() {
  // mimicking a small subprocess with minimal stdout output (not the actual process I'm spawning)
  const proc = Bun.spawn(['sed', '20q', '/etc/passwd'], {
    stdio: ['ignore', 'pipe', 'pipe'],
  });
  // In my app, I'm consuming the stdout stream output, but I skip it in this test
  // to rule it out, so I'm just waiting for the process to exit
  // const output = await Bun.readableStreamToText(proc.stdout);
  await proc.exited;
}

while (true) {
  console.clear();
  console.log(`RSS: ${process.memoryUsage().rss / 1024 / 1024} MB`);
  await spawn();
  await Bun.sleep(1000);
}

And here's the reading I got after 12 hours:

RSS: 701.03125 MB

You can also play around with the sleep duration and see how the memory grows, although it might not happen immediately, as you can see in the graphs above. You might want to keep it running for a few hours or overnight (in which case it would be helpful to log the max RSS too).

I understand RSS is not necessarily an indication of memory leaks, but even when I look at the overall memory consumption of my app's process from the host system, I see that it reaches around 1.5 GB after a while, even for this small repro above.

What is the expected behavior?

Memory usage should stabilize over time.

I ran the repro snippet above with node v23.8.0 using child_process.spawn side by side with Bun for the same duration (btw, Bun used waaaaaaay less heap), and this is the reading I got:

RSS: 92.53125 MB
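
For reference, the Node side of that comparison looked roughly like this (a minimal sketch; I didn't include the exact Node script above):

import { spawn } from 'node:child_process';
import { setTimeout as sleep } from 'node:timers/promises';

// Same short-lived subprocess and stdio layout as the Bun snippet;
// wait for the child to exit without consuming its output.
function spawnOnce(): Promise<void> {
  return new Promise((resolve, reject) => {
    const proc = spawn('sed', ['20q', '/etc/passwd'], {
      stdio: ['ignore', 'pipe', 'pipe'],
    });
    proc.on('error', reject);
    proc.on('exit', () => resolve());
  });
}

while (true) {
  console.clear();
  console.log(`RSS: ${process.memoryUsage().rss / 1024 / 1024} MB`);
  await spawnOnce();
  await sleep(1000);
}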

What do you see instead?

Memory grows unbounded to over 1GB

Additional information

If this is not a "memory leak" in the classical sense, could you please help me understand what it is, and why I don't get similar RSS readings with other runtimes?

@iitzkube iitzkube added bug Something isn't working needs triage labels Mar 18, 2025
@RiskyMH RiskyMH added performance An issue with performance bun.js Something to do with a Bun-specific API bun:spawn and removed needs triage bun.js Something to do with a Bun-specific API labels Mar 18, 2025
iitzkube (Author) commented Mar 18, 2025

Slightly more detailed repro with readableStreamToText added:

const spawn = async (...args: string[]) => {
  const proc = Bun.spawn([...args], {
    stdio: ['ignore', 'pipe', 'ignore'],
  });
  const output = await Bun.readableStreamToText(proc.stdout);
  let ret = false;
  // Walk the split lines to simulate doing some work on the output;
  // ret ends up true whenever there is at least one line.
  const lines = output.split('\n');
  for (let i = 0; i < lines.length; i++) {
    if (i === lines.length - 1) {
      ret = true;
    }
  }
  return ret;
};

const toMB = (bytes: number) => `${(bytes / 1024 ** 2).toFixed(1)} MB`;

let min = 0;
let max = 0;
let initial = 0;

while (true) {
  // mimicking a small subprocess with minimal stdout output (called twice)
  await spawn('sed', '5q', '/etc/passwd');
  await spawn('sed', '10q', '/etc/passwd');

  if (!initial) {
    min = max = initial = process.memoryUsage().rss;
    continue;
  }

  console.clear();

  const { rss, heapUsed, heapTotal } = process.memoryUsage();

  min = Math.min(min, rss);
  max = Math.max(max, rss);

  console.log('        Pid', process.pid);
  console.log('  Heap Used', toMB(heapUsed));
  console.log(' Heap Total', toMB(heapTotal));
  console.log('Current RSS', toMB(rss));
  console.log('    Min RSS', toMB(min));
  console.log('    Max RSS', toMB(max));

  await Bun.sleep(1000);
}

// btw `--inspect` crashes if I don't have imports/exports
export {};

I'll run this for a while and report back tomorrow.

iitzkube (Author) commented Mar 18, 2025

I'll run this for a while and report back tomorrow.

Alright, I deployed the script above exactly as it is as a Docker container using the oven/bun:latest image, and this is what the memory looks like over the last ~12 hours:

[Image: container memory usage graph over ~12 hours]

I took some snapshots of docker stats for the container to see its memory usage, and here's what it looks like from the time the container started until about 12 hours later:

CONTAINER ID   NAME                  CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O       PIDS
c2c5c9efece6   lucid_chandrasekhar   2.24%     34.86MiB / 3.886GiB   0.88%     1.39MB / 15.3kB   131kB / 143kB   17
c2c5c9efece6   lucid_chandrasekhar   2.69%     45.81MiB / 3.886GiB   1.15%     1.55MB / 1.33MB   127MB / 5.8MB   21
c2c5c9efece6   lucid_chandrasekhar   2.40%     97.07MiB / 3.886GiB   2.44%     2.19MB / 6.75MB   391MB / 197MB   26
c2c5c9efece6   lucid_chandrasekhar   2.29%     125.9MiB / 3.886GiB   3.17%     1.63MB / 1.76MB   812MB / 197MB   55
c2c5c9efece6   lucid_chandrasekhar   3.22%     141.5MiB / 3.886GiB   3.56%     1.76MB / 2.96MB   1.03GB / 197MB  55
c2c5c9efece6   lucid_chandrasekhar   6.48%     177.6MiB / 3.886GiB   4.46%     1.84MB / 3.63MB   1.14GB / 197MB  56
c2c5c9efece6   lucid_chandrasekhar   4.54%     191.6MiB / 3.886GiB   4.81%     1.88MB / 4.03MB   1.9GB / 197MB   59
c2c5c9efece6   lucid_chandrasekhar   2.11%     348.7MiB / 3.886GiB   8.76%     5.79MB / 39MB     5.95GB / 197MB  21

I also kept the same script running on my mac, and here's what I got:

        Pid 7221
  Heap Used 6.8 MB
 Heap Total 1.6 MB
Current RSS 355.7 MB
    Min RSS 28.8 MB
    Max RSS 355.7 MB

Activity Monitor is showing 1.39 GB memory usage:

[Image: Activity Monitor showing 1.39 GB for the Bun process]

And lastly, just for the sake of comparison, I deployed another container running only Bun.serve, and it looks completely normal after 12 hours:

[Image: Bun.serve container memory graph, flat over 12 hours]
CONTAINER ID   NAME                CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O         PIDS
a50ebfda7de3   docker-bun-simple   1.55%     38.85MiB / 3.886GiB   0.98%     5.33MB / 35.1MB   2.63GB / 5.81MB   17

Jarred-Sumner (Collaborator) commented
very interesting. thank you for the data. one thing that would help us narrow down the cause: does it happen if stdout, stderr, and stdin are all set to ignore?

iitzkube (Author) commented Mar 19, 2025

does it happen if stdout, stderr, and stdin are all set to ignore?

@Jarred-Sumner that's a great question. The short answer is no!

The long answer with a bonus observation:

I ran these two scripts in separate containers using oven/bun:latest (revision: v1.2.5+013fdddc6) for around 3 hours:

spawn-stdio-ignore.ts
const spawn = async (...args: string[]) => {
  const proc = Bun.spawn([...args], {
    stdio: ['ignore', 'ignore', 'ignore'],
  });
  await proc.exited;
  return true;
};

const toMB = (bytes: number) => `${(bytes / 1024 ** 2).toFixed(1)} MB`;

let min = 0;
let max = 0;
let initial = 0;

while (true) {
  await spawn('sed', '5q', '/etc/passwd');
  await spawn('sed', '10q', '/etc/passwd');

  if (!initial) {
    min = max = initial = process.memoryUsage().rss;
    continue;
  }

  console.clear();

  const { rss, heapUsed, heapTotal } = process.memoryUsage();

  min = Math.min(min, rss);
  max = Math.max(max, rss);

  console.log('        Pid', process.pid);
  console.log('       Name', 'Docker - Bun.spawn');
  console.log('  Heap Used', toMB(heapUsed));
  console.log(' Heap Total', toMB(heapTotal));
  console.log('Current RSS', toMB(rss));

  console.log('    Min RSS', toMB(min));
  console.log('    Max RSS', toMB(max));

  await Bun.sleep(1000);
}
spawn-stdout-pipe.ts
const spawn = async (...args: string[]) => {
  const proc = Bun.spawn([...args], {
    stdio: ['ignore', 'pipe', 'ignore'],
  });
  const output = await Bun.readableStreamToText(proc.stdout);
  const lines = output.split('\n');
  for (let i = 0; i < lines.length; i++) {
    if (i === lines.length - 1) {
      return true;
    }
  }
  return false;
};

const toMB = (bytes: number) => `${(bytes / 1024 ** 2).toFixed(1)} MB`;

let min = 0;
let max = 0;
let initial = 0;

while (true) {
  await spawn('sed', '5q', '/etc/passwd');
  await spawn('sed', '10q', '/etc/passwd');

  if (!initial) {
    min = max = initial = process.memoryUsage().rss;
    continue;
  }

  console.clear();

  const { rss, heapUsed, heapTotal } = process.memoryUsage();

  min = Math.min(min, rss);
  max = Math.max(max, rss);

  console.log('        Pid', process.pid);
  console.log('       Name', 'Docker - Bun.spawn');
  console.log('  Heap Used', toMB(heapUsed));
  console.log(' Heap Total', toMB(heapTotal));
  console.log('Current RSS', toMB(rss));

  console.log('    Min RSS', toMB(min));
  console.log('    Max RSS', toMB(max));

  await Bun.sleep(1000);
}

The memory usage for each went like this:

[Images: memory graphs for spawn-stdio-ignore and spawn-stdout-pipe]

As you can see, with all stdio set to ignore, the RSS kept hovering in the 40-50 MB range, while in the other script, with stdout set to pipe, it increased once again.

I also had the stdio-ignore script running on my Mac, but it died at some point; last I checked, Activity Monitor was showing 13 MB under the "Memory" column after about an hour, which is super lean! That's the same column that showed the 1.39 GB value I mentioned in my previous comment.

[Image: Activity Monitor showing 13 MB for the stdio-ignore script]

Now, what's interesting is that the heap total/used numbers are completely different between the two scripts (both screenshots capture the same duration):

[Images: heap stats for spawn-stdio-ignore and spawn-stdout-pipe]

With stdio set to ignore, the heap numbers are much nicer and very consistent! With stdout set to pipe, the heap used is all over the place and, unlike in the other script, it's higher than the heap total, which did not make sense to me at all. I've noticed this in my previous tests too.

iitzkube (Author) commented Mar 21, 2025

I'm still experiencing a memory leak in 1.2.6-canary.97+f1cd5abfa, which includes the #18316 PR commit.

Please refer to #18316 (comment) for more details.

Can we please reopen this issue?

@Jarred-Sumner Jarred-Sumner reopened this Mar 21, 2025
Jarred-Sumner (Collaborator) commented
Just to rule it out, can you give it a run with await new Response(proc.stdout).text()? They use slightly different code paths.

iitzkube (Author) commented Mar 21, 2025

Just to rule it out, can you give it a run with await new Response(proc.stdout).text()? They use slightly different code paths.

It's about the same; the leak is still there.
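
For reference, this is roughly the variant I ran (a minimal sketch; the exact script isn't shown here), with new Response(...).text() swapped in for readableStreamToText:

const spawn = async (...args: string[]) => {
  const proc = Bun.spawn([...args], {
    stdio: ['ignore', 'pipe', 'ignore'],
  });
  // Slightly different code path: wrap the stdout stream in a Response
  // and read it to completion as text.
  const output = await new Response(proc.stdout).text();
  return output.length > 0;
};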

[Image: memory readings with new Response(proc.stdout).text()]
