Diagnostic code for fixed bug still present in v4.x's V8 #7574

Closed
misterdjules opened this issue Jul 7, 2016 · 13 comments
Labels
v8 engine Issues and PRs related to the V8 dependency.

@misterdjules

misterdjules commented Jul 7, 2016

  • Version: v4.0.0 and later in the v4.x release line.
  • Platform: all 64-bit platforms except AIX.
  • Subsystem: V8.

Lately, I've come across several crashes that look like duplicates of #3898. Here's the call stack of one of them, as displayed by mdb_v8's ::jsstack command:

> ::jsstack
native: v8::base::OS::Abort+9
native: v8::internal::PointersUpdatingVisitor::VisitPointers+0x90
native: v8::internal::HeapObject::IterateBody+0x569
native: v8::internal::MarkCompactCollector::EvacuateNewSpaceAndCandidate...
native: v8::internal::MarkCompactCollector::SweepSpaces+0x141
native: v8::internal::MarkCompactCollector::CollectGarbage+0x48
native: v8::internal::Heap::MarkCompact+0x60
native: v8::internal::Heap::PerformGarbageCollection+0x4c0
native: v8::internal::Heap::CollectGarbage+0x238
native: v8::internal::Heap::HandleGCRequest+0x8f
native: v8::internal::StackGuard::HandleInterrupts+0x31c
native: v8::internal::Runtime_StackGuard+0x2b
        (1 internal frame elided)
js:     <anonymous> (as ScatterGather.scatter)
        (1 internal frame elided)
        (1 internal frame elided)
        (1 internal frame elided)
js:     InnerArrayForEach
js:     forEach
        (1 internal frame elided)
js:     <anonymous> (as MessageIndexer.get_modified_threads)
        (1 internal frame elided)
        (1 internal frame elided)
        (1 internal frame elided)
        (1 internal frame elided)
js:     <anonymous> (as RiakCacheClient.write_cache)
        (1 internal frame elided)
js:     <anonymous> (as RiakRequest.callback)
js:     <anonymous> (as RiakRequest.on_response)
        (1 internal frame elided)
        (1 internal frame elided)
js:     <anonymous> (as PoolRequestSet.handle_response)
        (1 internal frame elided)
js:     <anonymous> (as PoolEndpoint.request_succeeded)
js:     <anonymous> (as PoolEndpointRequest.on_end)
        (1 internal frame elided)
js:     emit
js:     endReadableNT
js:     nextTickCallbackWith2Args
js:     _tickDomainCallback
        (1 internal frame elided)
        (1 internal frame elided)
native: v8::internal::Execution::Call+0x107
native: v8::Function::Call+0xff
native: v8::Function::Call+0x41
native: node::AsyncWrap::MakeCallback+0x22c
native: node::StreamBase::EmitData+0xdd
native: node::StreamWrap::OnReadImpl+0x14a
native: node::StreamWrap::OnRead+0x7c
native: uv__read+0x2ef
native: uv__stream_io+0x2a8
native: uv__io_poll+0x22a
native: uv_run+0x15e                  
native: node::Start+0x558
native: _start+0x6c

Unfortunately, I can't reproduce that crash and I can't share the core files that were provided to me.

My understanding was that, as mentioned in #3898, that crash was triggered by diagnostic code that was added to investigate another V8 issue.

What surprised me is that the fix for the original issue seems to have been integrated into the v4.x branch as part of the upgrade to V8 4.5.103.24 with c431725, but the diagnostic code is still there.

The commit that upgraded V8 to version 4.5.103.24 in node v4.x doesn't show the fix for the original issue (v8/v8@8606664) in the GitHub web UI, but running git show c43172578e3e10c2de84fc2ce0d6a7fb02a440d8 in a local checkout of the v4.x-staging branch shows that the fix was indeed integrated into the v4.x and v4.x-staging branches.

Another thing that surprised me is that c431725 represents the upgrade of V8 to version 4.5.103.24, but the fix for the original issue (v8/v8@8606664) doesn't seem to be part of V8's 4.5.103.24 release:

➜  v8 git:(master) ✗ git tag --contains 8606664b37f4dc4b42106563984c19e4f72d9d3a | grep 4.5.103.24
➜  v8 git:(master) ✗

It's very likely that I'm missing something about how release branches/tags are organized in V8's repository though.

Now that raises a few questions:

  1. The current situation seems to be that node v4.x branches have the fix for a bug, but the diagnostic code for that bug, which makes some node programs crash, is still present. Is that on purpose, or is it an inconsistency that fell through the cracks?
  2. If the fix for that bug is present, how come the diagnostic code still triggers crashes? If I understand correctly, the diagnostic code aborts if a JS object has an address whose value may have been overwritten by a double. "Whether the address of an object may have been overwritten by a double" is determined by checking whether its address is > 1 << 48. Is it possible for a valid JS object to have an address that is > 1 << 48 in a 64-bit process? If so, does that mean that the diagnostic code was overly conservative and triggered false positives? Basically, I'm trying to determine whether it makes sense to remove that diagnostic code by backporting https://codereview.chromium.org/1420253002 to v4.x-staging now that the original bug has been fixed. (See the sketch after this list.)
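
For illustration, the check in question amounts to something like the following minimal C++ sketch (names and structure are illustrative, not the actual V8 source; it assumes a 64-bit build):

// Illustrative sketch only: treat any slot value above 1 << 48 as a heap
// pointer that may have been overwritten by a double, and abort.
#include <cstdint>
#include <cstdio>
#include <cstdlib>

constexpr uintptr_t kThreshold = uintptr_t{1} << 48;

void CheckSlot(uintptr_t slot_value) {
  if (slot_value > kThreshold) {
    // Corresponds to the v8::base::OS::Abort frame at the top of the
    // ::jsstack output above.
    std::abort();
  }
}

int main() {
  CheckSlot(uintptr_t{1} << 40);     // typical low user-space address: passes
  std::puts("low address passed the check");
  CheckSlot(0xFFFF800000000000ull);  // top 16 bits set: aborts here
}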

FWIW, I tested a backport of https://codereview.chromium.org/1420253002 to v4.x-staging with make check and make test-v8, and it doesn't trigger any regressions.

/cc @nodejs/v8.

@misterdjules misterdjules added v8 engine Issues and PRs related to the V8 dependency. v4.x labels Jul 7, 2016
@misterdjules
Author

misterdjules commented Jul 7, 2016

Another thing that surprised me is that c431725 represents the upgrade of V8 to version 4.5.103.24, but the fix for the original issue (v8/v8@8606664) doesn't seem to be part of V8's 4.5.103.24 release:

➜  v8 git:(master) ✗ git tag --contains 8606664b37f4dc4b42106563984c19e4f72d9d3a | grep 4.5.103.24
➜  v8 git:(master) ✗

It's very likely that I'm missing something about how release branches/tags are organized in V8's repository though.

Indeed, I was missing something. The fix for the original issue was integrated into V8's 4.5.103.24 release with v8/v8@ef23c3a. I'm still wondering why the diagnostic code is still there.

@MylesBorins
Contributor

/cc @ofrobots @nodejs/v8 @nodejs/lts

should we be manually backporting misterdjules@0ee8690 to v4.x-staging?

@MylesBorins MylesBorins self-assigned this Jul 7, 2016
@bnoordhuis
Member

should we be manually backporting misterdjules/node-1@0ee8690 to v4.x-staging?

Seems fine to me. It should bump V8_PATCH_LEVEL in deps/v8/include/v8-version.h though.
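
For reference, the version is carried by a handful of macros in that header; bumping the patch level for a floated fix would look roughly like this (the patch-level values here are illustrative, not the actual ones in v4.x-staging):

// deps/v8/include/v8-version.h (excerpt; patch-level values are illustrative)
#define V8_MAJOR_VERSION 4
#define V8_MINOR_VERSION 5
#define V8_BUILD_NUMBER 103
// Bump only this line when floating an extra fix on top of the bundled V8:
#define V8_PATCH_LEVEL 25  // e.g. 24 -> 25; the real value depends on previously floated patches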

@ofrobots
Contributor

ofrobots commented Jul 7, 2016

It's very likely that I'm missing something about how release branches/tags are organized in V8's repository though.

For posterity, to check whether a particular V8 commit has been cherry-picked back to release lines, you can use tools/release/mergeinfo.py in a V8 checkout:

❯ tools/release/mergeinfo.py 8606664b37f4dc4b42106563984c19e4f72d9d3a
1.) Searching for "8606664b37f4dc4b42106563984c19e4f72d9d3a"
=====================ORIGINAL COMMIT START===================
commit 8606664b37f4dc4b42106563984c19e4f72d9d3a
Author: hpayer <[email protected]>
Date:   Mon Aug 17 08:24:13 2015 -0700

    Filter out slot buffer slots, that point to SMIs in dead objects.

    The following situation may happen which reproduces this bug:
    (1) We allocate JSObject A on an evacuation candidate.
    (2) We allocate JSObject B on a non-evacuation candidate.
    (3) Incremental marking starts and marks object A and B.
    (4) We create a reference from B.field = A; which records the slot B.field since A is on an evacuation candidate.
    (5) After that we write a SMI into B.field.
    (6) After that B goes into dictionary mode and shrinks its original size. B.field is now outside of the JSObject, i.e B.field is in memory that will be freed by the sweeper threads.
    (7) GC is triggered.
    (8) BUG: Slots buffer filtering walks over the slots buffer, SMIs are not filtered out because we assumed that SMIs are just ignored when the slots get updated later. However, recorded SMI slots of dead objects may be overwritten by double values at evacuation time.
    (9) During evacuation, a heap number that looks like a valid pointer is moved over B.field.
    (10) The slots buffer is scanned for updates, follows B.field since it looks like a pointer (the double value looks like a pointer), and crashes.

    BUG=chromium:519577,chromium:454297
    LOG=y

    Review URL: https://codereview.chromium.org/1286343004

    Cr-Commit-Position: refs/heads/master@{#30200}
=====================ORIGINAL COMMIT END=====================
2.) General information:
Is LKGR: True
Is on Canary: 2487
3.) Found follow-up commits, reverts and ports:
4.) Found merges:
ef23c3a Version 4.5.103.24 (cherry-pick)
---Merged to:
branch-heads/4.5
  origin/4.5-lkgr
Finished successfully

It seems that the fix was merged into V8 4.5 as part of the above cherry-pick. However, the diagnostic code wasn't removed until October 23, 2015, by which time V8 4.5 was already a dead branch:

❯ tools/release/mergeinfo.py 22c5e464c94ec4455731fdb6d09c82514e355535
1.) Searching for "22c5e464c94ec4455731fdb6d09c82514e355535"
=====================ORIGINAL COMMIT START===================
commit 22c5e464c94ec4455731fdb6d09c82514e355535
Author: hpayer <[email protected]>
Date:   Fri Oct 23 06:45:27 2015 -0700

    [heap] Remove debugging code of crbug/454297.

    BUG=

    Review URL: https://codereview.chromium.org/1420253002

    Cr-Commit-Position: refs/heads/master@{#31523}
=====================ORIGINAL COMMIT END=====================
2.) General information:
Is LKGR: True
Is on Canary: 2547
3.) Found follow-up commits, reverts and ports:
4.) Found merges:
Finished successfully

@misterdjules
Author

@ofrobots Thank you for the info on tools/release/mergeinfo.py.

Do you have any opinion on question 2 from my comment above?

If the fix for that bug is present, how come the diagnostic code still triggers crashes? If I understand correctly, the diagnostic code aborts if a JS object has an address whose value may have been overwritten by a double. "Whether the address of an object may have been overwritten by a double" is determined by checking whether its address is > 1 << 48. Is it possible for a valid JS object to have an address that is > 1 << 48 in a 64-bit process? If so, does that mean that the diagnostic code was overly conservative and triggered false positives? Basically, I'm trying to determine whether it makes sense to remove that diagnostic code by backporting https://codereview.chromium.org/1420253002 to v4.x-staging now that the original bug has been fixed.


@ofrobots
Contributor

ofrobots commented Jul 7, 2016

Is it possible for a valid JS object to have an address that is > 1 << 48 in a 64-bit process?

The diagnostic code is using unsigned arithmetic, so yes, on x86-64, it is possible for an address to be larger than 1 << 48 when treated as unsigned (although I am not sure whether Linux grants those mappings to user space; no idea about other OSes). x86-64 uses a 48-bit sign-extended virtual address space.

@misterdjules: What OS and architecture are your core dumps from? If x86-64, I would expect heap_obj->address() to have its two most significant bytes set to 0xffff.
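
To make the unsigned-comparison point concrete, here is a small sketch (assuming a 64-bit build; not V8 code) showing that every legal upper-half x86-64 address compares greater than 1 << 48:

// x86-64 canonical addresses are 48-bit sign-extended: the lower half ends at
// 0x00007FFFFFFFFFFF and the upper half starts at 0xFFFF800000000000, so any
// upper-half address is > 1 << 48 under an unsigned comparison.
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t threshold       = uint64_t{1} << 48;
  const uint64_t lower_half_top  = 0x00007FFFFFFFFFFFull;
  const uint64_t upper_half_base = 0xFFFF800000000000ull;
  std::printf("lower half > 1 << 48: %d\n", static_cast<int>(lower_half_top > threshold));   // 0
  std::printf("upper half > 1 << 48: %d\n", static_cast<int>(upper_half_base > threshold));  // 1
}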

@misterdjules
Author

@ofrobots

The diagnostic code is using unsigned arithmetic, so yes, on x86-64, it is possible for an address to be larger than 1 << 48 when treated as unsigned (although I am not sure whether Linux grants those mappings to user space; no idea about other OSes). x86-64 uses a 48-bit sign-extended virtual address space.

Right, my question was more about V8's JS heap management than about addresses in a process's native heap in general. If valid JS objects can have addresses > 1 << 48, then I fail to see how the diagnostic code, which makes V8 abort when such an object is found, makes sense. So I must be missing something.

What OS and architecture are your core dumps from?

x86-64; the diagnostic code is only enabled for that target architecture.

If x86-64, I would expect heap_obj->address() to have its two most significant bytes set to 0xffff.

Why?

@ofrobots
Contributor

ofrobots commented Jul 7, 2016

If the diagnostic code is asserting, then you have an address larger than 1 << 48. Legal addresses larger than 1 << 48 would have the top 16 bits set. I would be concerned if that wasn't the case. What is the actual value you see?

@misterdjules
Author

misterdjules commented Jul 7, 2016

@ofrobots Can you quote the parts of the comments you're responding to? Otherwise it's difficult to know what you're replying to, and the discussion becomes confusing.

If the diagnostic code is asserting, then you have an address larger than 1 << 48. Legal addresses larger than 1 << 48 would have the top 16 bits set. I would be concerned if that wasn't the case.

Is that in response to this exchange?

If x86-64, I would expect heap_obj->address() to have its two most significant bytes set to 0xffff.

Why?

If so, I thought that by "I would expect heap_obj->address() to have its two most significant bytes set to 0xffff", you meant that JS heap objects whose addresses have their two most significant bytes set to 0xffff are valid, which seems contradictory to the diagnostic code that crashes when such an object is found.

@ofrobots
Contributor

ofrobots commented Jul 7, 2016

Which seems contradictory to the diagnostic code that crashes when such an object is found.

Yes, I agree with this. I am trying to get a better understanding of the situation you are describing from your core dumps. That is why I was asking about the actual value of the address (or just the top 2 bytes, if you are paranoid) on which you hit the assert. Here are the possible scenarios:

  • 0x0000: Not possible, as you are hitting the assert.
  • 0xffff: Surprising, but possible. These would be legal addresses if the OS (which you didn't specify) is granting them to user space. The diagnostic code is incorrect in this case. Also note that the diagnostic code conflates "OS" and "architecture".
  • Something else: Concerning. Somehow we have an invalid value in the address slot.

@misterdjules
Author

That is why I was asking about the actual value of the address (or just the top 2 bytes, if you are paranoid) on which you hit the assert.

This is a bit tricky to determine because a lot of the diagnostic code and its callers are inlined.

0xffff: Surprising, but possible. These would be legal addresses if the OS (which you didn't specify) is granting them to user space.

These cores were generated on SmartOS. It turns out that on SmartOS (and probably on most OSes derived from Solaris), mmapped pages can be mapped at addresses with the two most significant bytes set. That was what I was missing. The output of pmap on one of these core files illustrates this:

[jgilli@dev ~/]$ pmap core-file
core 'core-file' of 45834: node --nouse-idle-notification /some/program.js
0000000000400000      22412K r-x--  /opt/local/bin/node
00000000019F2000        104K rw---  /opt/local/bin/node
0000000001A0C000      23828K rw---    [ heap ]
000000F0B4000000       1024K rw---    [ anon ]
000000F0B4100000       1024K rw---    [ anon ]
000000F0B4200000       2048K rw---    [ anon ]
000000F0B5000000       1024K rw---    [ anon ]
000000F0B5100000       1024K rw---    [ anon ]
000000F0B5200000       2048K rw---    [ anon ]
00000277B9600000       1024K rw---    [ anon ]
00000493B3800000       1024K rw---    [ anon ]
000005279DB00000       1024K rw---    [ anon ]
000008E749E00000       1024K rw---    [ anon ]
0000095B77700000       1024K rw---    [ anon ]
00000B07BBB00000       1024K rw---    [ anon ]
00000C0054700000       1024K rw---    [ anon ]
00000D3AA7800000       1024K rw---    [ anon ]
00000DC990B00000       1024K rw---    [ anon ]
00000FC06C800000       1024K rw---    [ anon ]
00000FEDCD100000       1024K rw---    [ anon ]
0000117F97700000       1024K rw---    [ anon ]
000011823B500000       1024K rw---    [ anon ]
0000119FF3AD2000        256K rw---    [ anon ]
0000119FF3B12000       3840K rw---    [ anon ]
00001280F4600000       1024K rw---    [ anon ]
0000145F82600000       1024K rw---    [ anon ]
000014D936500000       1024K rw---    [ anon ]
000016D265A00000       1024K rw---    [ anon ]
000019BD8C900000       1044K rw---    [ anon ]
00001B129BF00000        812K rw---    [ anon ]
00001BB54F300000       1024K rw---    [ anon ]
00001C1A06100000       1024K rw---    [ anon ]
00001DD4F8300000       1024K rw---    [ anon ]
00001DF887200000       1024K rw---    [ anon ]
00001E55BBF00000       1024K rw---    [ anon ]
00002269B0100000       1024K rw---    [ anon ]
000023384F100000       1024K rw---    [ anon ]
000024479FD00000         76K rw---    [ anon ]
00002544C8600000       1024K rw---    [ anon ]
0000282E46F00000        128K rw---    [ anon ]
00002C6C63500000       1024K rw---    [ anon ]
00002FDE31800000       1024K rw---    [ anon ]
00003006EE200000       1024K rw---    [ anon ]
000030FE37000000         20K rw---    [ anon ]
000030FE37006000          4K rwx--    [ anon ]
000030FE37007000          4K rwx--    [ anon ]
000030FE37100000         20K rw---    [ anon ]
000030FE37106000          4K rwx--    [ anon ]
000030FE37200000         20K rw---    [ anon ]
000030FE37206000          4K rwx--    [ anon ]
000030FE37300000         20K rw---    [ anon ]
000030FE37306000        264K rwx--    [ anon ]
000030FE38F00000         20K rw---    [ anon ]
000030FE38F06000        996K rwx--    [ anon ]
000030FE39700000         20K rw---    [ anon ]
000030FE39706000        996K rwx--    [ anon ]
000030FE39800000         20K rw---    [ anon ]
000030FE39806000        996K rwx--    [ anon ]
000030FE3F000000         20K rw---    [ anon ]
000030FE3F006000        996K rwx--    [ anon ]
00003491895E5000        128K rw---    [ anon ]
0000349189605000        128K rw---    [ anon ]
0000349189625000        256K rw---    [ anon ]
00003C546E500000       1024K rw---    [ anon ]
00003D77F7900000       1024K rw---    [ anon ]
FFFFFD7FCC240000         48K r-x--  /foo/bar/node_modules/bcrypt/build/Release/bcrypt_lib.node
FFFFFD7FCC25B000          8K rw---  /foo/bar/node_modules/bcrypt/build/Release/bcrypt_lib.node
FFFFFD7FCC260000         16K r-x--  /foo/bar/node_modules/sse4_crc32/build/Release/sse4_crc32.node
FFFFFD7FCC273000          4K rw---  /foo/bar/node_modules/sse4_crc32/build/Release/sse4_crc32.node
FFFFFD7FCC274000          8K rw---  /foo/bar/node_modules/sse4_crc32/build/Release/sse4_crc32.node
FFFFFD7FCC2E0000         96K r-x--  /opt/local/gcc49/x86_64-sun-solaris2.11/lib/amd64/libgcc_s.so.1
FFFFFD7FCC307000          8K rw---  /opt/local/gcc49/x86_64-sun-solaris2.11/lib/amd64/libgcc_s.so.1
FFFFFD7FCC310000       1120K r-x--  /opt/local/gcc49/x86_64-sun-solaris2.11/lib/amd64/libstdc++.so.6.0.20
FFFFFD7FCC437000         44K rw---  /opt/local/gcc49/x86_64-sun-solaris2.11/lib/amd64/libstdc++.so.6.0.20
FFFFFD7FCC442000         84K rw---  /opt/local/gcc49/x86_64-sun-solaris2.11/lib/amd64/libstdc++.so.6.0.20
FFFFFD7FCF4B0000       1076K r-x--  /opt/local/gcc47/x86_64-sun-solaris2.11/lib/amd64/libstdc++.so.6.0.17
FFFFFD7FCF5CC000         40K rw---  /opt/local/gcc47/x86_64-sun-solaris2.11/lib/amd64/libstdc++.so.6.0.17
FFFFFD7FCF5D6000         84K rw---  /opt/local/gcc47/x86_64-sun-solaris2.11/lib/amd64/libstdc++.so.6.0.17
FFFFFD7FD2900000         96K r-x--  /opt/local/gcc47/x86_64-sun-solaris2.11/lib/amd64/libgcc_s.so.1
FFFFFD7FD2927000          4K rw---  /opt/local/gcc47/x86_64-sun-solaris2.11/lib/amd64/libgcc_s.so.1
FFFFFD7FF9520000          4K r-x--  /lib/amd64/libsendfile.so.1
FFFFFD7FF9531000          4K rw---  /lib/amd64/libsendfile.so.1
FFFFFD7FFD200000       1024K rw---    [ anon ]
FFFFFD7FFD61F000          4K rw---    [ stack tid=10 ]
FFFFFD7FFD81E000          4K rw---    [ stack tid=9 ]
FFFFFD7FFDA1D000          4K rw---    [ stack tid=8 ]
FFFFFD7FFDC1C000          4K rw---    [ stack tid=7 ]
FFFFFD7FFDC1E000          8K r-x--  /lib/amd64/librt.so.1
FFFFFD7FFE001000          4K rw---    [ stack tid=6 ]
FFFFFD7FFE1FF000          8K rw---    [ stack tid=5 ]
FFFFFD7FFE3FE000          8K rw---    [ stack tid=4 ]
FFFFFD7FFE5FD000          8K rw---    [ stack tid=3 ]
FFFFFD7FFE600000          8K r-x--  /lib/amd64/libkstat.so.1
FFFFFD7FFE612000          4K rw---  /lib/amd64/libkstat.so.1
FFFFFD7FFE7B0000         64K rwx--    [ anon ]
FFFFFD7FFE9CA000          8K rw---    [ stack tid=2 ]
FFFFFD7FFE9CD000         12K r-x--  /lib/amd64/libpthread.so.1
FFFFFD7FFEA50000         72K r-x--  /lib/amd64/libsocket.so.1
FFFFFD7FFEA72000          4K rw---  /lib/amd64/libsocket.so.1
FFFFFD7FFEBD0000        476K r-x--  /lib/amd64/libumem.so.1
FFFFFD7FFEC56000          4K rwx--  /lib/amd64/libumem.so.1
FFFFFD7FFEC67000        144K rw---  /lib/amd64/libumem.so.1
FFFFFD7FFEC8B000         44K rw---  /lib/amd64/libumem.so.1
FFFFFD7FFED00000        532K r-x--  /lib/amd64/libnsl.so.1
FFFFFD7FFED95000         12K rw---  /lib/amd64/libnsl.so.1
FFFFFD7FFED98000         32K rw---  /lib/amd64/libnsl.so.1
FFFFFD7FFEED0000        396K r-x--  /lib/amd64/libm.so.2
FFFFFD7FFEF43000          8K rw---  /lib/amd64/libm.so.2
FFFFFD7FFF020000          4K r----    [ anon ]
FFFFFD7FFF030000          4K rwx--    [ anon ]
FFFFFD7FFF040000          4K rwx--    [ anon ]
FFFFFD7FFF050000          4K rwx--    [ anon ]
FFFFFD7FFF060000          4K rwx--    [ anon ]
FFFFFD7FFF070000          4K r-x--    [ anon ]
FFFFFD7FFF080000         64K rw---    [ anon ]
FFFFFD7FFF095000        128K rw---    [ anon ]
FFFFFD7FFF0B6000          4K rwx--    [ anon ]
FFFFFD7FFF0C0000         64K rwx--    [ anon ]
FFFFFD7FFF0E0000         24K rwx--    [ anon ]
FFFFFD7FFF0F0000          4K rwx--    [ anon ]
FFFFFD7FFF100000          4K rwx--    [ anon ]
FFFFFD7FFF110000       1520K r-x--  /lib/amd64/libc.so.1
FFFFFD7FFF29C000         44K rw---  /lib/amd64/libc.so.1
FFFFFD7FFF2A7000         16K rw---  /lib/amd64/libc.so.1
FFFFFD7FFF2B0000          4K rwx--    [ anon ]
FFFFFD7FFF2C0000          4K rwx--    [ anon ]
FFFFFD7FFF2D0000          4K rwx--    [ anon ]
FFFFFD7FFF2E0000          4K rwx--    [ anon ]
FFFFFD7FFF2F0000          4K rwx--    [ anon ]
FFFFFD7FFF300000          4K rwx--    [ anon ]
FFFFFD7FFF310000          4K rwx--    [ anon ]
FFFFFD7FFF320000          4K rwx--    [ anon ]
FFFFFD7FFF330000          4K r----    [ anon ]
FFFFFD7FFF340000          4K rwx--    [ anon ]
FFFFFD7FFF350000          4K rwx--    [ anon ]
FFFFFD7FFF360000          4K rw---    [ anon ]
FFFFFD7FFF370000          4K rw---    [ anon ]
FFFFFD7FFF380000          4K rwx--    [ anon ]
FFFFFD7FFF390000          4K rwx--    [ anon ]
FFFFFD7FFF398000        332K r-x--  /lib/amd64/ld.so.1
FFFFFD7FFF3FB000          8K rwx--  /lib/amd64/ld.so.1
FFFFFD7FFF3FD000          8K rwx--  /lib/amd64/ld.so.1
FFFFFD7FFFDEF000         68K rw---    [ stack ]
         total       104360K
[jgilli@dev ~/]$

In the output above, we can see that some [anon] mappings (mmapped pages, used for allocating space on the JS heap) are mapped at virtual addresses > 2**48, like this instance:

FFFFFD7FFD200000       1024K rw---    [ anon ]

So it seems the diagnostic code was triggering false positives on SmartOS (and probably on most OSes derived from Solaris). Thus, I would think it is safe to remove it: these crashes in the diagnostic code do not indicate that the original bug wasn't fixed, but rather that the diagnostic code does not work properly on Solaris/SmartOS.
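
As a concrete check of that conclusion, the [ anon ] mapping quoted above already exceeds the threshold used by the diagnostic code (a small sketch reusing that address, not V8 code):

// A legitimate SmartOS [ anon ] mapping address has its top 16 bits set to
// 0xffff and therefore trips the > (1 << 48) heuristic.
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t threshold    = uint64_t{1} << 48;
  const uint64_t anon_mapping = 0xFFFFFD7FFD200000ull;  // from the pmap output above
  std::printf("top 16 bits: 0x%04llx\n",
              static_cast<unsigned long long>(anon_mapping >> 48));                          // 0xffff
  std::printf("trips the diagnostic check: %s\n", anon_mapping > threshold ? "yes" : "no");  // yes
}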

@ofrobots Does that sound reasonable to you?

@ofrobots
Contributor

ofrobots commented Jul 7, 2016

Thanks for posting the pmap output; based on that I agree that the dumps look like false positives caused by imperfect diagnostic code.

@misterdjules
Author

misterdjules commented Sep 15, 2016

Closing this issue, as I believe it was fixed by #7584, which landed in v4.x with 4107b5d.
