Detected Java memory leak with Google Cloud Datastore API 1.0.2 (the problem starts from version 0.13.0-beta) #2093
Labels
api: datastore
Issues related to the Datastore API.
dependencies
priority: p1
Important issue which blocks shipping the next release. Will be fixed prior to next release.
Before: I had used Google Cloud Datastore API for Java 0.8.0 (beta) since December 2016 on Google App Engine Flex / Java 8 / Jetty, and it worked perfectly.
After: Last week I updated the Google Cloud Datastore Java API to 1.0.2 (GA), and I noticed the OOM killer terminating the Java process frequently. That was the only thing I changed, so I decided to roll back to 0.8.0. Everything looks good again, and the OOM killer no longer kills the Java process.
So I just want to raise this issue for you all first.
Update as of 30 May 2017:
Today I spent some time testing this case to find which version starts to show the OOM (out-of-memory) problem. I can now confirm that google-cloud-datastore 0.12.0-beta is the LAST version on which the OOM killer does not terminate the Java process in my environment; 0.13.0 onward triggers the OOM problem.
I observed 2 things during these OOM problems:
1. Inside the Java runtime itself, checking free memory via Runtime.getRuntime().freeMemory() shows plenty of memory available.
2. At the OS level, however, the top command shows the Java process's resident memory (RES) growing continuously until it takes about 90% of system memory; then the kswapd0 process kicks in, and I see the OOM kill in the log (via Google Cloud Console Log).
So the suspect in this case is a library allocating native memory (not allocated by the JRE itself, since per #1 freeMemory() reports plenty of free space).
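A minimal diagnostic sketch (my own, not part of the library) that illustrates the mismatch described above: the heap counters reported by Runtime can look healthy while off-heap "direct" buffers, the kind netty allocates for native I/O, are tracked in a separate buffer pool and never show up in freeMemory():

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class MemoryCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // These are the heap-only numbers that looked "plenty free" in #1.
        System.out.printf("Heap: free=%d MB, total=%d MB, max=%d MB%n",
                rt.freeMemory() / (1024 * 1024),
                rt.totalMemory() / (1024 * 1024),
                rt.maxMemory() / (1024 * 1024));

        // Direct buffers live outside the Java heap, so a leak here grows
        // the process's RES (as seen in top) while freeMemory() stays high.
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            System.out.printf("Buffer pool '%s': used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}
```

Logging both sets of numbers periodically should make a native-memory leak visible: the "direct" pool (or the RES column in top) keeps climbing while the heap figures stay flat.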
I found this topic netty/netty#6221 discussing the netty library: a user who updated to netty 4.1.7-Final got OOMs, and that user also tested netty 4.1.6, which was the last version that worked fine. That is the same netty version used in google-cloud-datastore 0.12.0-beta.
Hope this helps you investigate the problem in the API soon. Thanks!!