Buffer and ArrayBuffer-based objects trigger mark-sweeps instead of scavenges #1671
Comments
I really appreciate your findings lately, but I'm not sure what you expect io.js to do about this. io.js has used native v8 typed arrays since this commit by @bnoordhuis in 2013. |
It depends on the size of the typed array or
|
@benjamingr If this is confirmed as an upstream (v8) bug, it should be reported upstream. |
To be clear, I wasn't criticizing you for opening an issue; I was asking what you think or expect io.js to be able to do about this. Does this not reproduce on "pure" d8? |
I don't have a recent d8 version at hand (the one I have installed is 3.30.something). I will check a bit later, hopefully today. Most probably this should be reported upstream. And I was not sure that the reason for |
With an old v8 version (3.30.33.16), … Both large and small Buffer and Uint8Array objects are allocated in the external memory space. (Checked in iojs + v8 4.3). |
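A minimal way to observe this from JS (assuming a Node/io.js version whose process.memoryUsage() exposes the external counter; the snippet below is an illustrative sketch, not a testcase from this thread):

'use strict';
// Sketch: Buffer allocations are backed by external memory, so they show up
// in `external` rather than (mostly) in `heapUsed`.
function mb(n) { return (n / 1048576).toFixed(1) + ' MB'; }

var before = process.memoryUsage();
var buffers = [];
for (var i = 0; i < 100; i++) buffers.push(Buffer.alloc(1024 * 1024)); // ~100 MB of Buffers
var after = process.memoryUsage();

console.log('heapUsed delta:', mb(after.heapUsed - before.heapUsed)); // small
console.log('external delta:', mb(after.external - before.external)); // roughly 100 MB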
You're forgetting that |
(Issue title changed: "Buffer and Uint8Array objects are not collectable by the incremental GC" → "Buffer and Uint8Array objects trigger full GC")
Well, technically those objects are collected with scavenges, but the problem is that they do not trigger scavenges. A testcase for that:

'use strict';
var count = 0, limit = 1000000;
var start = process.hrtime();
var something = [];
for (var j = 0; j < 500; j++) something.push(Math.random());
var slices = 0;

function call() {
  var buffer = new Buffer(16 * 1024);
  var bufferx;
  for (var j = 0; j < slices; j++) bufferx = something.slice();
  count++;
  if (count > limit) {
    console.log(slices, process.hrtime(start));
    slices++;
    if (slices > 5) {
      process.exit(0);
    }
    count = 0;
    gc(); gc(); gc(); gc();
    start = process.hrtime();
  }
  setImmediate(call);
}

for (var i = 0; i < 20; i++) {
  call();
}

Results:
You can see that while each increment of |
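Since "pure" d8 was brought up earlier: the testcase above relies on Node APIs (Buffer, setImmediate) and has to be run with node --expose-gc. A hypothetical d8-oriented variant, not from the thread, could be run with d8 --expose-gc --trace-gc; note that a synchronous loop is not an exact equivalent of the setImmediate-driven one.

'use strict';
// Hypothetical d8 variant: Uint8Array instead of Buffer, a plain loop instead
// of setImmediate, print() instead of console.log().
var something = [];
for (var j = 0; j < 500; j++) something.push(Math.random());

for (var slices = 0; slices <= 5; slices++) {
  var start = Date.now();
  for (var count = 0; count < 1000000; count++) {
    var buffer = new Uint8Array(16 * 1024);
    for (var k = 0; k < slices; k++) var bufferx = something.slice();
  }
  print(slices, Date.now() - start);
  gc(); gc(); gc(); gc();
}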
(Issue title changed: "Buffer and Uint8Array objects trigger full GC" → "Buffer and Uint8Array objects trigger mark-sweeps instead of scavenges")
@trevnorris Thanks. This last test uses 16 KiB buffers, so it should be fine now. |
https://groups.google.com/forum/#!topic/v8-users/7bg5ym8t7KU — topic in the v8-users group, with an updated testcase targeting |
(Issue title changed: "Buffer and Uint8Array objects trigger mark-sweeps instead of scavenges" → "Buffer and TypedArray-based objects trigger mark-sweeps instead of scavenges")
(Issue title changed: "Buffer and TypedArray-based objects trigger mark-sweeps instead of scavenges" → "Buffer and ArrayBuffer-based objects trigger mark-sweeps instead of scavenges")
Some IRC logs with @bnoordhuis:
|
would you mind filing a (v8) bug about which collector gets chosen? I think it should be easy to fix |
patch here: https://codereview.chromium.org/1201993002 (even though it requests GC in NEW_SPACE, the GC heuristic might do a mark/compact instead, just not always) |
@jeisinger Thanks! Will test in a moment. |
@jeisinger Does not seem to change anything, at least for the testcase at #2022 (comment) and for the testcases supplied above. |
@jeisinger, issue filed: https://code.google.com/p/v8/issues/detail?id=4209 Edit: Ah, here I am, mentioning the wrong person again. Sorry. Fixed =). I should try to copy-paste instead of typing the @ + first letter and letting GitHub auto-complete. |
@jeisinger Any reasons why the patch could not work? |
The patch kinda does what it's supposed to do... if you run with --trace-gc you'll see that the first GC is now a scavenge. The benchmark creates external garbage in a busy loop, which is kinda unusual behavior. As soon as you enable slightly more realistic behavior (you also create js garbage in the busy loop), the GC heuristic does what it's supposed to do. |
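A sketch of what "slightly more realistic behavior" might look like here (my reading of the comment above, not code from the thread; Buffer.alloc is the modern spelling of the io.js-era new Buffer): each tick produces ordinary JS garbage alongside the external Buffer garbage, so the regular new-space heuristics have something to trigger on.

'use strict';
// Hypothetical variant of the busy loop: external garbage (a Buffer) plus
// plain JS heap garbage (small short-lived objects) on every iteration.
function tick() {
  var buffer = Buffer.alloc(16 * 1024);            // external garbage
  var junk = [];
  for (var i = 0; i < 100; i++) {
    junk.push({ i: i, s: String(Math.random()) }); // JS heap garbage
  }
  setImmediate(tick);
}
tick();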
@jeisinger Ah.
With patch:
Yes, it looks like part of the problem is solved, at least. Though I am not yet sure that this will change things much even for the «more realistic behaviour». It also doesn't look like the tripping point for the size of sliced arrays has changed in the testcase at https://groups.google.com/forum/#!topic/v8-users/7bg5ym8t7KU . Here is a simpler testcase. How should I change this to be «realistic»?

'use strict';
var count = 0;
function call() {
  var buffer = new ArrayBuffer(100 * 1024 * 1024);
  if (count++ < 100) setTimeout(call, 100);
}
call();
What do you mean by that? |
The main issue here is that the example you gave doesn't do anything. If the optimizer was super smart it could replace it with an empty program. That makes it difficult to optimize for this test case. |
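For comparison, a sketch (not the testcase actually used in this thread) of how the 100 MiB ArrayBuffer example could be made less trivially dead: write to each buffer and keep a reference to the most recent one, so even a very smart optimizer cannot reduce the program to a no-op, while the buffers still become garbage almost immediately.

'use strict';
// Hypothetical tweak: touch the allocated memory and retain only the latest
// buffer; every previous buffer becomes external garbage right away.
var count = 0;
var latest = null;
function call() {
  var buffer = new ArrayBuffer(100 * 1024 * 1024);
  new Uint8Array(buffer)[0] = count & 0xff; // touch the memory
  latest = buffer;                          // drop the reference to the previous buffer
  if (count++ < 100) setTimeout(call, 100);
}
call();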
I had to revert the change again as it resulted in too many scavenges without freeing up enough external memory. If you have a non-trivial benchmark that is negatively affected by this, I'm happy to look into this again. |
I am not sure if this helps or is related, but with Node 7 and Node 8 we have been seeing massive jumps in RSS a few days after launching a process. For a week or so (it varies), things run smoothly without fragmentation, and then suddenly RSS just keeps climbing until it hits our max old space size limit which is 32 GB. This may be anecdotal but it seems that physically rebooting the Ubuntu machine gets things back to normal again for a few days, whereas restarting just the Node process does not. We have done no releases to cause any change, and at the inflection points in the logs I can only see buffers of 30 MB or so being allocated and then released within a few seconds, every few minutes. The load patterns are the same every day. It's just that at some point the RSS starts climbing without coming down. It's not an increase in load or anything like that. |
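(A side note, not from the thread: assuming a Node version whose process.memoryUsage() reports the external counter, logging the individual counters over time helps tell such cases apart; growing heapUsed points at JS objects, growing external at Buffer/ArrayBuffer backing stores, and RSS climbing while both stay flat points at allocator-level fragmentation.)

'use strict';
// Illustrative sketch: drop into the running app to log memory counters
// periodically; unref() keeps the timer from holding the process open.
setInterval(function () {
  var m = process.memoryUsage();
  console.log(new Date().toISOString(),
    'rss=' + (m.rss / 1048576).toFixed(0) + 'MB',
    'heapUsed=' + (m.heapUsed / 1048576).toFixed(0) + 'MB',
    'external=' + (m.external / 1048576).toFixed(0) + 'MB');
}, 60 * 1000).unref();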
@jorangreef can you provide the |
@trevnorris, this is before and after the first inflection point in the graph above:
|
We landed proper retaining counters for array buffers in https://chromium-review.googlesource.com/c/519386. What is left is adding a heuristic at some safe point where we can do a Scavenge (e.g. when advancing a page). This might be a good project for somebody who feels like starting to contribute to V8. Otherwise, we will land bits and pieces as we have spare cycles. The memory consumption issue should probably get its own bug, as it is not related to triggering Scavenges instead of Mark-Compact GCs. |
@trevnorris I'm sorry, the chart I provided above a few days ago turned out to be a red herring. I found the hockey stick jumps in RSS were due to hash table resizing in our storage system (the green line below): |
Did anyone ever follow up on this? |
@mlippautz @nodejs/v8 @ChALkeR Is this still an issue in Node.js 11? (I imagine so.) |
@Trott Just tested on v11.10, triggering scavenges manually still speeds up my benchmarks. |
If I am not mistaken, there's a patch upcoming soon that might solve this issue. The tracking bug is https://bugs.chromium.org/p/v8/issues/detail?id=9701. |
@BridgeAR That's great news, thanks! I can test that patch if it lands cleanly on our v8. |
This still affects latest master. |
New patch in https://chromium-review.googlesource.com/c/v8/v8/+/1803614 landed cleanly.

master without patch:

$ ./node.master i1671.js
[ 0, 752038399 ] 132.38671875
[ 0, 775638647 ] 137.05859375
[ 0, 612542735 ] 147.8359375
[ 0, 798670672 ] 143.47265625
[ 0, 686932159 ] 147.91015625
[ 0, 652596915 ] 147.9453125
[ 0, 767230859 ] 142.64453125
[ 0, 704772366 ] 140.1171875
[ 0, 721586181 ] 132.13671875
[ 0, 752640443 ] 140.2265625
[ 0, 804869781 ] 140.30078125
[ 0, 797604137 ] 148.34765625
[ 0, 765060783 ] 148.3359375
^C
$ ./node.master --expose-gc i1671.js
[ 0, 510874711 ] 83.9765625
[ 0, 473433967 ] 84.859375
[ 0, 472625270 ] 85.38671875
[ 0, 469358379 ] 87.3359375
[ 0, 475308306 ] 87.3359375
[ 0, 474982840 ] 91.46484375
[ 0, 473101868 ] 91.46875
[ 0, 475998364 ] 91.46875
[ 0, 479278017 ] 91.46875
[ 0, 472223707 ] 93.08984375
^C

master with patch:

$ ./node.patched i1671.js
[ 0, 689388941 ] 100.10546875
[ 0, 692828663 ] 112.4140625
[ 0, 579924797 ] 106.2890625
[ 0, 590019919 ] 116.75
[ 0, 572502033 ] 111.28515625
[ 0, 617208788 ] 110.2109375
[ 0, 580989751 ] 118.20703125
[ 0, 609618953 ] 112.43359375
[ 0, 603448117 ] 119.36328125
^C
$ ./node.patched --expose-gc i1671.js
[ 0, 544357967 ] 74.984375
[ 0, 492740927 ] 85.03515625
[ 0, 537714774 ] 91.72265625
[ 0, 511871085 ] 91.734375
[ 0, 493428551 ] 94.171875
[ 0, 493925924 ] 94.17578125
[ 0, 491575551 ] 98.0859375
[ 0, 497687444 ] 98.08984375
[ 0, 492114640 ] 98.08984375
[ 0, 484245828 ] 98.09765625
[ 0, 487521134 ] 98.1015625
[ 0, 492115625 ] 98.10546875

There seems to be a significant improvement in the automatic-gc version and a slight slowdown in the manual-gc version. Manual gc is still faster, but it is hand-tweaked for this specific setup, so it might be hard to reach that. Will file a PR against node master to run the perf checks. I expect noticeable improvements. |
With the patch, all unnecessary mark-sweeps have been replaced with scavenges! |
#31007 (or an update to a newer V8 version with the corresponding commit) might actually close this issue. |
Original commit message:

[heap] Perform GCs on v8::BackingStore allocation

This adds heuristics to perform young and full GCs on allocation of external ArrayBuffer backing stores. Young GCs are performed proactively based on the external backing store bytes for the young generation. Full GCs are performed only if the allocation fails. Subsequent CLs will add heuristics to start incremental full GCs based on the external backing store bytes. This will allow us to remove AdjustAmountOfExternalMemory for ArrayBuffers.

Bug: v8:9701, chromium:1008938
Change-Id: I0e8688f582989518926c38260b5cf14e2ca93f84
Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/1803614
Commit-Queue: Ulan Degenbaev <[email protected]>
Reviewed-by: Dominik Inführ <[email protected]>
Reviewed-by: Hannes Payer <[email protected]>
Cr-Commit-Position: refs/heads/master@{#65480}

PR-URL: #31007
Refs: v8/v8@687d865
Refs: #1671
Reviewed-By: Michaël Zasso <[email protected]>
Reviewed-By: Gus Caplan <[email protected]>
Reviewed-By: Rich Trott <[email protected]>
Reviewed-By: Anna Henningsen <[email protected]>
I believe this is now resolved with #31007. 🎉 |
Testcase:
You can see that this causes a lot of gctype: 2 GC events.
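The "gctype: 2" above corresponds to V8's mark-sweep/compact GC type (1 is a scavenge). On current Node versions one can count GC events by kind without an external module via perf_hooks; the sketch below assumes Node with GC performance entries (entry.detail.kind on newer Node, entry.kind on older ones) and is not the tool originally used in this issue.

'use strict';
// Sketch: tally GC events by kind; kind 1 = minor GC (scavenge),
// kind 2 = major GC (mark-sweep/compact), mirroring V8's GCType values.
const { PerformanceObserver } = require('perf_hooks');

const counts = {};
const obs = new PerformanceObserver(function (list) {
  for (const entry of list.getEntries()) {
    const kind = entry.detail ? entry.detail.kind : entry.kind; // field moved in newer Node
    counts[kind] = (counts[kind] || 0) + 1;
  }
});
obs.observe({ entryTypes: ['gc'] });

process.on('exit', function () {
  console.log('GC events by kind:', counts);
});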