Jetty version(s)
11.0.20
Java version/vendor (use: java -version)
17.0.11
Spring boot version(s)
2.7.21
OS type/version
Linux
We are using Jetty over HTTP/2 in an IPv6 setup with a Kubernetes deployment.
We are trying to verify the fault tolerance of our system on an unstable network (we are performing this exercise because our application was crashing on an actually faulty network). For that, we have used the netem module of Linux to simulate packet loss:
tc qdisc add dev eth0 root netem limit 2000 loss 20%
Here we are simulating 20% packet loss with running traffic.
We are observing request timeouts, which was the expected behaviour for us, but along with this our heap memory is getting completely full, triggering frequent GC cycles. This ultimately leads to our application being killed with exit code 137 by Kubernetes.
Even on a faulty network we expect our application to just return error status codes, not to crash.
On a healthy network, the assigned heap memory remains well under control.
We took a heap dump of our application and analyzed it with Eclipse Memory Analyzer. The report points to org.eclipse.jetty.http2.client.HTTP2ClientSession as the leak suspect, taking the major chunk of memory.
PFA a Word doc containing screenshots of the report from the heap dump.
We know we are not on a supported version, but we are in the process of migrating. If you could look at the report and point us to why this behaviour could be happening, that would be really helpful.
@joakime Yes, we understand that, and we are already in the process of migrating to Jetty 12. Until then, we would appreciate your input on the leak suspect class of the Jetty client: why could this be happening, and would the same behaviour stop in Jetty 12?
From your document, the actual cause of OOME is: "Cannot reserve 1048676 bytes of direct buffer memory".
So you have a direct buffer memory issue in your load test, which is orthogonal to the memory occupied by the heap that your document shows.
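Since direct buffer usage does not show up in a heap dump, it can help to track it at runtime alongside heap metrics. A minimal sketch using the standard `BufferPoolMXBean` API (class and method names here are illustrative, not part of Jetty):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class DirectBufferStats {

    // Returns the bytes currently used by the JVM's "direct" buffer pool,
    // or -1 if the pool is not found.
    public static long directUsed() {
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // Allocate a direct buffer so the pool reports non-zero usage.
        java.nio.ByteBuffer.allocateDirect(1024 * 1024);
        System.out.println("direct pool used bytes: " + directUsed());
    }
}
```

Logging this periodically during the netem test would show whether direct memory grows in step with the heap, or fails independently.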
Having said that, there appears to be a large Deque for slots (about 21k entries), but it's not clear how many elements are actually present.
The slot entries are also reported as various object types, which is strange because they should either be ControlEntry or DataEntry.
There is too little information for us to proceed: your document shows odd information, and you are running under extreme conditions that I don't think are worthwhile to optimize for, unless there is clear evidence that we can do better.
What seems to have happened is that your JVM crashed because it could not allocate native memory, due to the extreme conditions of your tests.
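As a general JVM tuning note (not Jetty-specific advice), the direct-memory ceiling can be made explicit with the standard `-XX:MaxDirectMemorySize` flag, so that exhaustion surfaces as an `OutOfMemoryError` inside the JVM instead of native allocation failures under the container's memory limit. A sketch of a launch line (the jar name and sizes are placeholders):

```shell
java -Xmx2g -XX:MaxDirectMemorySize=512m -jar app.jar
```

This also keeps heap plus direct memory comfortably below the Kubernetes memory limit, which is what typically triggers the 137 (OOM-killed) exit code.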
It's not evident that we should do something in Jetty.
heapdumpanalysis.docx