
Increase default 'max_semi_space_size' value to reduce GC overhead in V8 #42511

Closed
JialuZhang-intel opened this issue Mar 29, 2022 · 20 comments · Fixed by #42575
Labels
  • feature request: Issues that request new features to be added to Node.js.
  • help wanted: Issues that need assistance from volunteers or PRs that need help to proceed.
  • stale
  • v8 engine: Issues and PRs related to the V8 dependency.

Comments

@JialuZhang-intel
Contributor

What is the problem this feature will solve?

When I used Node.js to run the web-tooling-benchmark, I found that the runtime flag --max_semi_space_size has a big impact on the result: total throughput increased by about 18% after I passed --max_semi_space_size=128 to node. So I did some investigation into the 'max_semi_space_size' flag.

From some official V8 blog posts (Getting garbage collection for free, orinoco-parallel-scavenger), I learned that there are two garbage collection strategies in V8:

Scavenge GC: fast, high frequency, for young generation objects.

Major GC: slow, low frequency, full garbage collections.

When JavaScript code creates a new object, it is allocated in the semi_space as a young-generation object. When the semi_space is about to run out, the V8 engine triggers a Scavenge GC to clean up the garbage objects in the semi_space.

If I use the --max_semi_space_size flag to raise the maximum semi_space size, Scavenge GC runs less frequently. This brings both advantages and disadvantages:

  • Advantage: throughput improvement. Because Scavenge GC runs less often, total GC pause time also decreases, so node has more CPU time left to execute JavaScript code (see the sketch after this list for one way to observe this).
  • Disadvantage: more memory usage. Obviously, a larger semi_space costs more memory.
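
For illustration, here is a minimal sketch (not part of the original report; the script name, the flag value, and the allocation loop are just examples) that counts minor (scavenge) GC pauses using the perf_hooks GC observer. Running it with and without --max-semi-space-size can show how scavenge frequency and total pause time change:

```js
// gc-observe.js - count scavenge (minor) GC pauses and their total duration.
// Run e.g.:  node gc-observe.js
//            node --max-semi-space-size=128 gc-observe.js
'use strict';
const { PerformanceObserver, constants } = require('perf_hooks');

let minorCount = 0;
let minorTime = 0;

const obs = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Newer Node.js versions expose the GC kind on entry.detail, older ones on entry.kind.
    const kind = entry.detail ? entry.detail.kind : entry.kind;
    if (kind === constants.NODE_PERFORMANCE_GC_MINOR) {
      minorCount++;
      minorTime += entry.duration;
    }
  }
});
obs.observe({ entryTypes: ['gc'] });

// Allocate lots of short-lived objects to put pressure on the young generation.
let keep = null;
for (let i = 0; i < 1e6; i++) {
  keep = { i, payload: 'x'.repeat(64) };
}

// Give the observer callback a moment to receive the GC entries before reporting.
setTimeout(() => {
  console.log(`minor GCs: ${minorCount}, total pause: ${minorTime.toFixed(2)} ms`, keep.i);
  obs.disconnect();
}, 200);
```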

It's a trade-off between time and space. V8 sets the default max_semi_space_size to 16MB on 64-bit systems and 8MB on 32-bit systems (related code). I think it's a heuristic value chosen mainly for client devices with small RAM (for example, some Android devices only have 4GB of RAM). But for server scenarios, memory usually isn't the bottleneck; throughput is.

So the problem is:

Is the current V8 default max_semi_space_size (16MB/8MB) also the best configuration for Node.js?

What is the feature you are proposing to solve the problem?

To investigate, I tuned --max_semi_space_size (16MB, 32MB, 64MB, 128MB, 256MB) and tested the web-tooling-benchmark and a simple service based on ghost.js. Here are the test results:

[figure: throughput and peak memory usage for each max_semi_space_size setting]

From the figure we can see that:

  • Peak memory usage increases linearly with max_semi_space_size.
  • Throughput grows quickly while max_semi_space_size is less than 128MB, then stays roughly flat above 128MB.
  • The scale of the throughput improvement is workload-dependent, probably due to the greater GC pressure from web-tooling-benchmark.

So I think we can choose a better max_semi_space_size value and pass this runtime flag to V8 at node startup.
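
For reference, here is a minimal sketch of the two usual ways to set the flag today (the 64MB value is arbitrary and only for illustration, not a recommendation from this issue): on the command line at startup, or programmatically via v8.setFlagsFromString(), which may not take full effect once the heap is already initialized, so the command-line form is the reliable one:

```js
// Preferred: pass the flag when starting the process, e.g.
//   node --max-semi-space-size=64 server.js
//
// V8 flags can also be set from JavaScript, but this may not take effect
// reliably after the heap has been initialized, so prefer the CLI flag.
'use strict';
const v8 = require('v8');

v8.setFlagsFromString('--max-semi-space-size=64');

// v8.getHeapSpaceStatistics() lets you inspect the current new-space size.
const newSpace = v8.getHeapSpaceStatistics().find((s) => s.space_name === 'new_space');
console.log('current new_space size (bytes):', newSpace && newSpace.space_size);
```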

What alternatives have you considered?

Test environment:

  • Hardware:
    • CPU: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
    • RAM: 500GB
  • Software:
    • OS: Ubuntu 20.04.1 (x86_64)
    • Linux version: 5.11.0-41-generic
    • Docker version: 20.10.11
    • node version: v18.0.0-pre

Test process:

  1. Build a docker container with the node binary and the workload in it.
  2. Start multiple containers (one container per vCPU) to make sure the system's total CPU usage is above 90%.
  3. Run the workload in the started containers concurrently and monitor the system's total memory usage periodically.
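
Step 3 monitored total system memory; as an aside, a minimal per-process sketch of the same idea (not part of the original setup) could look like this:

```js
// Periodic memory sampler for a single Node.js process (the original tests
// monitored total system memory across all containers instead).
'use strict';

const mb = (n) => (n / 1024 / 1024).toFixed(1);

setInterval(() => {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  console.log(`rss=${mb(rss)}MB heapTotal=${mb(heapTotal)}MB heapUsed=${mb(heapUsed)}MB`);
}, 5000).unref(); // unref() so the sampler alone does not keep the process alive
```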
@JialuZhang-intel JialuZhang-intel added the feature request Issues that request new features to be added to Node.js. label Mar 29, 2022
@targos targos added the v8 engine Issues and PRs related to the V8 dependency. label Mar 29, 2022
@targos
Member

targos commented Mar 29, 2022

@nodejs/v8

@joyeecheung
Member

joyeecheung commented Mar 30, 2022

I remember there were similar requests for max_old_space_size (whose relatively small default value is the source of a lot of confusion). If we decide to use different defaults, I suppose these should all be tweaked to an optimized setting, though it's difficult to tell which setting is the best as a one-size-fits-all solution, and I think that's why we've been keeping the V8 defaults so far. Another factor is that these are all configurable via the command line, and in general Node.js core prefers to let the user choose what's best for their use cases instead of picking one itself.

@JialuZhang-intel
Contributor Author

@joyeecheung Thanks for your explanation! Are there any official documents about what Node.js core prefers? And if there are such documents, can we put this preferred max_semi_space_size configuration in them?

@ronag
Member

ronag commented Apr 1, 2022

I guess this also applies to workers and the maxYoungGenerationSizeMb option?

@ronag
Member

ronag commented Apr 1, 2022

Actually, it doesn't seem to be possible to set the semi-space size for workers? @jasnell @addaleax

Only:

  • max_young
  • max_old

can be set. https://lbwa.github.io/v8-reference/classv8_1_1_resource_constraints.html
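
For workers specifically, the young-generation limit is exposed through the resourceLimits option rather than a dedicated semi-space setting. A minimal sketch (the worker file name and the MB values are placeholders):

```js
'use strict';
const { Worker } = require('worker_threads');

// maxYoungGenerationSizeMb bounds the young generation (the space collected
// by scavenge GC) of the worker; maxOldGenerationSizeMb bounds the old generation.
const worker = new Worker('./task.js', {
  resourceLimits: {
    maxYoungGenerationSizeMb: 64,
    maxOldGenerationSizeMb: 512,
  },
});

worker.on('exit', (code) => console.log('worker exited with code', code));
```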

@ronag
Member

ronag commented Apr 1, 2022

and another factor is that these are all configurable via the command line and in general Node.js core prefers to let the user choose what's best for their use cases instead of picking one itself

The problem with this is that 99% of users don't know what is best for their use case, so I guess we have to make some kind of decision about defaults that work best for most of our users. I guess the current defaults are optimized more for browser workloads than for server workloads? IMHO it might be worth looking into optimizing the defaults to better suit our users.

@joyeecheung
Member

And if there are such documents, can we put this preferred max_semi_space_size configuration in them?

We already document --max-old-space-size with some suggestions (https://github.com/nodejs/node/blob/master/doc/api/cli.md#--max-old-space-sizesize-in-megabytes), so adding documentation for the semi space size there makes sense, too.

@JialuZhang-intel
Contributor Author

JialuZhang-intel commented Apr 2, 2022

@ronag

The problem with this is that 99% of users don't know what is best for their use case.

I agree with this; V8 has to consider the browser's memory consumption on mobile devices with small RAM. But for server scenarios, memory usually isn't the bottleneck.

V8 provides the interface set_max_semi_space_size_in_kb() to change the default max_semi_space_size. I found a related issue that tries to set the default max_young and max_old generation sizes according to the system's physical memory, using the ResourceConstraints::ConfigureDefaults interface in V8. But V8 caps the max heap size at 2GB, and there is a proportional relationship between semi_space_size and heap_size, so the default max_semi_space_size can't be larger than 16MB. Can we create a similar ConfigureDefaults function for Node.js and choose an optimal configuration for server scenarios?
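
To make the idea concrete, below is a very rough sketch of what a Node.js-level heuristic could look like. The chooseSemiSpaceSizeMb() helper and its thresholds are made up for illustration; they are not what V8's ConfigureDefaults does and not a proposal for specific values:

```js
'use strict';
const os = require('os');

// Hypothetical heuristic: pick a larger semi-space on machines with plenty of
// RAM, and keep V8's 64-bit default (16 MB) on small machines.
function chooseSemiSpaceSizeMb() {
  const totalGb = os.totalmem() / 1024 ** 3;
  if (totalGb >= 64) return 128;
  if (totalGb >= 16) return 64;
  if (totalGb >= 8) return 32;
  return 16;
}

// A launcher script could then start the real app with the chosen value, e.g.
//   node --max-semi-space-size=<chooseSemiSpaceSizeMb()> app.js
console.log('suggested --max-semi-space-size:', chooseSemiSpaceSizeMb());
```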

JialuZhang-intel added a commit to JialuZhang-intel/node that referenced this issue Apr 2, 2022
Add the '--max_semi_space_size' flag into useful V8 option.

Fixes: nodejs#42511
@JialuZhang-intel
Contributor Author

@joyeecheung

We already document --max-old-space-size with some suggestions (https://github.com/nodejs/node/blob/master/doc/api/cli.md#--max-old-space-sizesize-in-megabytes), so adding documentation for the semi space size there makes sense, too.

OK, I added an introduction to --max_semi_space_size to the document; this is the related PR (#42575).

JialuZhang-intel added a commit to JialuZhang-intel/node that referenced this issue Jun 13, 2022
@fdc-liebreich

Unfortunately this doesn't yet allow using this option via NODE_OPTIONS. It seems it has to be explicitly allowed in src/node_options.cc. Is there anyone with C++ skills who can fix this?

@JialuZhang-intel
Contributor Author

Hi @fdc-liebreich, I'm not quite clear about this: "Unfortunately this doesn't yet allow using this option via NODE_OPTIONS." Do you mean Node.js doesn't support passing the runtime flag --max_semi_space_size? Actually, you can pass all the runtime flags defined by V8 directly to Node.js (see flag-definitions.h).
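
One quick way to check whether a given flag is accepted via NODE_OPTIONS on a particular Node.js build is to spawn a child process with the variable set and see whether it starts. This is only an ad-hoc sketch, not an official mechanism:

```js
'use strict';
const { spawnSync } = require('child_process');

// Node.js refuses to start with an error such as "... is not allowed in NODE_OPTIONS"
// when a flag is not on the NODE_OPTIONS allowlist.
const result = spawnSync(process.execPath, ['-e', 'process.exit(0)'], {
  env: { ...process.env, NODE_OPTIONS: '--max-semi-space-size=64' },
  encoding: 'utf8',
});

if (result.status === 0) {
  console.log('--max-semi-space-size is accepted via NODE_OPTIONS on this build');
} else {
  console.log('rejected:', result.stderr.trim());
}
```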

aduh95 pushed a commit that referenced this issue Jun 17, 2022
Add the `--max_semi_space_size` flag into useful V8 option.

Fixes: #42511

PR-URL: #42575
Reviewed-By: Ben Noordhuis <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Michael Dawson <[email protected]>
targos pushed a commit that referenced this issue Jul 12, 2022
Add the `--max_semi_space_size` flag into useful V8 option.

Fixes: #42511

PR-URL: #42575
Reviewed-By: Ben Noordhuis <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Michael Dawson <[email protected]>
@FantasyNeurotic

FantasyNeurotic commented Jul 19, 2022

Some related cases: https://www.alibabacloud.com/blog/node-js-application-troubleshooting-manual---comprehensive-gc-problems-and-optimization_594965

targos pushed a commit that referenced this issue Jul 31, 2022
Add the `--max_semi_space_size` flag into useful V8 option.

Fixes: #42511

PR-URL: #42575
Reviewed-By: Ben Noordhuis <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Michael Dawson <[email protected]>
guangwong pushed a commit to noslate-project/node that referenced this issue Oct 10, 2022
Add the `--max_semi_space_size` flag into useful V8 option.

Fixes: nodejs/node#42511

PR-URL: nodejs/node#42575
Reviewed-By: Ben Noordhuis <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Michael Dawson <[email protected]>
@mai1x9

mai1x9 commented Jan 17, 2023

(quotes the full original issue description from above)

@JialuZhang-intel I have tried setting max semi space size to 32, 64, and 128 MB and ran a backend server using Express. After load testing with Apache Bench, throughput increased for 64MB as rightly mentioned (RPS for the backend had increased by about 10%). However, when repeating the test multiple times, RPS actually decreased significantly for one test run; I believe it's due to scavenge GC freeing up memory. With 16MB, RPS is quite consistent across all runs with no significant dip, unlike 64MB. Setting it to 128MB gave no significant improvement.

Is setting max semi space size to 128MB or 64MB still a good idea?

camallen added a commit to zooniverse/sugar that referenced this issue Jan 26, 2023
camallen added a commit to zooniverse/sugar that referenced this issue Jan 26, 2023
@ronag ronag reopened this Mar 8, 2023
@ronag ronag added the tsc-agenda Issues and PRs to discuss during the meetings of the TSC. label Mar 8, 2023
@ronag ronag added the help wanted Issues that need assistance from volunteers or PRs that need help to proceed. label Mar 8, 2023
@ronag
Member

ronag commented Mar 8, 2023

We would love to have a PR for this to discuss.

@bnoordhuis
Member

Context: #46608 (comment)

@ronag
Member

ronag commented Mar 22, 2023

This was discussed at the TSC meeting again. There is interest in seeing a PR; until then there is not much to discuss at the TSC level.

vvagaytsev pushed a commit to garden-io/garden that referenced this issue Aug 22, 2023
* perf: speed up `splitLast` function

* perf: skip function wrapping if profiler is disabled

* perf: index files into a tree structure to get files for a subdirectory more quickly

* perf: increase max-semi-space-size for less GC interruptions at the cost of some more memory

See https://github.com/nodejs/node/blob/main/doc/api/cli.md#useful-v8-options
Also see https://www.alibabacloud.com/blog/better-node-application-performance-through-gc-optimization_595119 and nodejs/node#42511 for some details on impact

* refactor: replace Bluebird with native Promises (part 1)

* refactor: replace bluebird with native promises (part 2)

* refactor: replace bluebird with native promises (part 3)

* chore: make the linter happy after all the bluebird replacements

* fix: splitLast behavior with no index found

* chore: linter again

* chore: cleanup file-tree and add header

* refactor: replace pLimit with pMap
@github-actions
Contributor

There has been no activity on this feature request for 5 months and it is unlikely to be implemented. It will be closed 6 months after the last non-automated comment.

For more information on how the project manages feature requests, please consult the feature request management document.

@github-actions github-actions bot added the stale label Sep 19, 2023
@github-actions
Contributor

There has been no activity on this feature request and it is being closed. If you feel closing this issue is not the right thing to do, please leave a comment.

For more information on how the project manages feature requests, please consult the feature request management document.

@github-actions github-actions bot closed this as not planned Oct 19, 2023
@kurtextrem

I guess this issue should stay open as a reminder for making a PR?

@ronag ronag reopened this Oct 19, 2023
@bnoordhuis
Member

There's nodejs/performance#67 already. I'm going to close this but feel free to send a PR.
