
GC regression in 0.11 #3467

Closed

stevewallin opened this issue Oct 27, 2018 · 18 comments

@stevewallin

I'm seeing a GC regression in 0.11 when running the TechEmpower Jetty benchmarks on the latest AdoptOpenJDK Linux nightly.

    java -version
    openjdk version "1.8.0_192"
    OpenJDK Runtime Environment (build 1.8.0_192-201810260329-b12)
    Eclipse OpenJ9 VM (build master-1721076a, JRE 1.8.0 Linux amd64-64-Bit Compressed References 20181026_76 (JIT enabled, AOT enabled)
    OpenJ9   - 1721076a
    OMR      - 2f4896a3
    JCL      - dc7639dee6 based on jdk8u192-b12)
Repeated java.lang.OutOfMemoryError errors are being thrown.
    jetty: JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2018/10/27 11:18:26 - please wait.
    jetty: JVMDUMP039I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" at 2018/10/27 11:18:26 - please wait.
    jetty: JVMDUMP032I JVM requested System dump using '/jetty/core.20181027.111826.9.0001.dmp' in response to an event
    jetty: JVMDUMP010I System dump written to /jetty/core.20181027.111826.9.0001.dmp
    jetty: JVMDUMP032I JVM requested Heap dump using '/jetty/heapdump.20181027.111826.9.0002.phd' in response to an event
    jetty: JVMDUMP010I Heap dump written to /jetty/heapdump.20181027.111826.9.0002.phd
    jetty: JVMDUMP032I JVM requested Heap dump using '/jetty/heapdump.20181027.111826.9.0003.phd' in response to an event
    jetty: JVMDUMP010I Heap dump written to /jetty/heapdump.20181027.111826.9.0003.phd
    jetty: JVMDUMP032I JVM requested Java dump using '/jetty/javacore.20181027.111826.9.0004.txt' in response to an event
    jetty: JVMDUMP010I Java dump written to /jetty/javacore.20181027.111826.9.0004.txt
    jetty: JVMDUMP032I JVM requested Snap dump using '/jetty/Snap.20181027.111826.9.0006.trc' in response to an event
    jetty: JVMDUMP010I Snap dump written to /jetty/Snap.20181027.111826.9.0006.trc
    jetty: JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
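(For reference, the dumps above are produced by OpenJ9's -Xdump agents firing on the systhrow event. A minimal sketch, from memory of the documented defaults, of the kind of agent specification involved:)

    # Illustrative sketch only - roughly the default OpenJ9 dump agents
    # that fire when java/lang/OutOfMemoryError is thrown by the VM
    java -Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError \
         -Xdump:heap:events=systhrow,filter=java/lang/OutOfMemoryError \
         -Xdump:java:events=systhrow,filter=java/lang/OutOfMemoryError \
         ...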

The test has previously run clean every time on all builds tested.

    ./tfb --type plaintext --test jetty --duration 60 --concurrency-levels 512 --pipeline-concurrency-levels 16384

The problem does not occur with -verbose:gc on, but I'm happy to capture comparison runs if needed.

javacore.20181027.111826.9.0004.txt
javacore.20181027.111826.9.0005.txt
javacore.20181027.111830.9.0009.txt
javacore.20181027.111831.9.0012.txt

The phd, dmp, and trc files are also available.

@pshipton
Member

@dmitripivkine

@gacholio
Contributor

As this is a systhrow, the core dump files would be very helpful.

@stevewallin
Author

I had to split the dump into a 4-part zip. Please remove the .zip extension from z01-z03 to expand.

core.20181027.111826.9.0001.dmp.zip
core.20181027.111826.9.0001.dmp.z01.zip
core.20181027.111826.9.0001.dmp.z02.zip
core.20181027.111826.9.0001.dmp.z03.zip
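(A sketch of one way to recombine the parts on Linux, assuming Info-ZIP's zip/unzip and that the renamed parts sit in the current directory:)

    # restore the split-archive names, then merge into a single zip
    mv core.20181027.111826.9.0001.dmp.z01.zip core.20181027.111826.9.0001.dmp.z01
    mv core.20181027.111826.9.0001.dmp.z02.zip core.20181027.111826.9.0001.dmp.z02
    mv core.20181027.111826.9.0001.dmp.z03.zip core.20181027.111826.9.0001.dmp.z03
    zip -s 0 core.20181027.111826.9.0001.dmp.zip --out combined.zip
    unzip combined.zip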

@dmitripivkine dmitripivkine self-assigned this Oct 29, 2018
@dmitripivkine
Contributor

dmitripivkine commented Oct 29, 2018

Is it possible to get GC verbose logs for "good" and "bad" runs (-Xverbosegclog:<file>)?
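(A sketch of the requested capture; the log file names are arbitrary, and launching Jetty via start.jar is an assumption about the benchmark setup:)

    # "bad" run on the 0.11 nightly
    java -Xverbosegclog:gc_bad.log -jar start.jar
    # "good" run on an earlier build
    java -Xverbosegclog:gc_good.log -jar start.jar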

@dmitripivkine
Contributor

The reported reason for the OOM is excessive GC:

11:18:26.690624000  0x1A37E00 j9mm.83              Event       Forcing J9AllocateObject() to fail due to excessive GC

11:18:26.690625000  0x0 j9mm.83              Event       Forcing J9AllocateObject() to fail due to excessive GC
11:18:26.690625000  0x0 j9mm.100             Event       J9AllocateObject() returning NULL! 16 bytes requested for object of class 0x1a48700 from memory space 'Generational' id=0x7f7e380c5be0
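(Background: OpenJ9 raises this flavour of OOM when it judges that too high a proportion of time is being spent in GC for too little memory reclaimed. For diagnosis only, the threshold can be tuned or the check disabled; a sketch using the documented options:)

    # raise the excessive-GC threshold (the default ratio is 95%)
    java -Xgc:excessiveGCratio=98 ...
    # or disable the excessive-GC check entirely (diagnostic use only)
    java -Xdisableexcessivegc ...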

@stevewallin
Author

Sure - I will look to capture both later today. When running with -verbose:gc I didn't get an OOM, but I will try with -Xverbosegclog - not sure if they are different?

@dmitripivkine
Contributor

Sure - I will look to capture both later today. When running with -verbose:gc I didn't get an OOM, but I will try with -Xverbosegclog - not sure if they are different?

No, the output is the same; -Xverbosegclog just writes it to the named file instead of stderr.

@pshipton
Member

@stevewallin can you please try it with the JVM option -Djdk.nativeCrypto=false?

@dmitripivkine
Contributor

dmitripivkine commented Oct 29, 2018

Here is the top of the list of class instances in the heap (instance count, total bytes, class name):

           65753             1052048  java/lang/Object
           49541           216310560  [C
           33593              537488  java/util/concurrent/atomic/AtomicReference
           32209              773016  java/net/InetSocketAddress$InetSocketAddressHolder
           32209              515344  java/net/InetSocketAddress
           29082              465312  java/lang/String
           26960              862720  java/util/concurrent/locks/ReentrantLock$NonfairSync
           26960              431360  java/util/concurrent/locks/ReentrantLock
           26947           104466368  [Ljava/lang/Object;
           19556              469344  java/util/HashMap$Node
           18775              300400  java/net/Inet4Address
           18774              450576  java/net/InetAddress$InetAddressHolder
           13520              324480  java/util/ArrayList
           13477              323448  java/lang/StringBuilder
           13460              323040  java/util/concurrent/locks/AbstractQueuedSynchronizer$ConditionObject
           13458              215328  org/eclipse/jetty/util/thread/Locker$Lock
           13458              215328  org/eclipse/jetty/util/thread/Locker
           13455              215280  java/util/concurrent/atomic/AtomicInteger
           13452              322848  java/util/concurrent/atomic/LongAdder
           13437             1182456  [Lorg/eclipse/jetty/http/HttpField;
           13436              214976  org/eclipse/jetty/http/HttpFields
           12940              724640  java/nio/HeapByteBuffer
            9391              225384  java/io/FileDescriptor
            9386             1126320  sun/nio/ch/SocketChannelImpl
            9386              525616  sun/nio/ch/SocketAdaptor
...
            6493           103438408  [Ljava/lang/String;

However, I have no idea what the top of the list looks like for a "good" run.
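(For comparison runs, a similar per-class histogram can be extracted from the system dump with OpenJ9's jdmpview tool; a sketch:)

    jdmpview -core core.20181027.111826.9.0001.dmp
    > info class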

@pshipton
Member

can you please try it with the JVM option -Djdk.nativeCrypto=false

If it's not easy to modify the command line, this option can be set in the IBM_JAVA_OPTIONS environment variable. If that isn't feasible either, edit the jre/lib/amd64/compressedrefs/options.default file and add the option there on the next line.
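(For example, in a shell or Dockerfile; the option string is the one requested above:)

    export IBM_JAVA_OPTIONS="-Djdk.nativeCrypto=false"
    # or, in a Dockerfile:
    # ENV IBM_JAVA_OPTIONS="-Djdk.nativeCrypto=false"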

@stevewallin
Author

I can add options easily to the Dockerfile CMD which starts the run... will run once I get home, as it's set up on my machine there :)

@stevewallin
Author

@pshipton -Djdk.nativeCrypto=false doesn't change the OOM behaviour - capturing GC logs now.

@DanHeidinga
Member

0.11.0 enabled -XX:+UseContainerSupport by default - see #3184. I'd like to rule out the container support changes as a cause if we can.

@stevewallin can you try running with -XX:-UseContainerSupport?
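(A sketch of such a run; the image name and the start.jar entrypoint are placeholders for the actual benchmark setup:)

    # same 8GB container limit as the benchmark, container support disabled
    docker run -m 8g <jetty-image> java -XX:-UseContainerSupport -jar /jetty/start.jar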

@stevewallin
Author

@DanHeidinga no change with -XX:-UseContainerSupport... I have just hit an OOM on a run with a previous JVM version while collecting a GC log, so I'm resetting all the containers and rebuilding/running in case something is amiss... :/

@stevewallin
Author

Sorry - looks like user error - the script didn't run the test for as long at previous versions with the old J9, so it didn't hit the OOM issue. HS runs clean in all scenarios for this test, but J9 gets OOM consistently on both versions...

@stevewallin
Author

OK:

  • Docker has an 8GB memory limit for the instance.
  • If I assign -mx2G, J9 heap usage stabilises around the 1.4GB mark and all runs are OK.

With no command-line options:

  • HS heap usage goes up to about 2GB.
  • J9 goes up to about 600MB, then OOM errors.

Any reason why J9 is constrained with default options and 8GB of RAM available?

@pshipton
Member

The default -Xmx is 512m for jdk8, for historical reasons really. I suppose we should consider changing it.
https://www.eclipse.org/openj9/docs/openj9_defaults/
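(The sizes the VM actually selected can be checked with -verbose:sizes, and an explicit -Xmx sidesteps the default; a sketch, with the start.jar path a placeholder:)

    # print the memory sizes the VM selected (should include the heap sizes)
    java -verbose:sizes -version
    # explicit workaround for this benchmark: size the heap yourself
    java -Xmx2g -jar /jetty/start.jar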

@pshipton
Member

I've created #3467 to consider changing the default -Xmx for jdk8, and I'm closing this issue.
