I'm trying to tune a high-load application that streams data from one cloud to another with some preprocessing. Its specific profile is heavy memory usage with low CPU consumption. I monitored the app with jconsole and got an interesting picture: the CPU is loaded to at most 15%, yet I'm still hitting out-of-memory errors.
Manually triggering "Perform GC" from jconsole frees a lot of memory in all generations, so I assume there is no memory leak in the application.
The application runs on Mesos/Marathon, so I tried switching from a single virtual CPU to multiple CPUs and between garbage collectors (-XX:+UseG1GC and -XX:+UseParallelGC, with no other tuning), but the picture stays essentially the same.
I think I should share the results of my investigation.
"Out of memory" - is what I've got from DevOps-guy, and first thing I imagine - OutOfMemoryException. So, thanks Alex for clarifying question.
In my case it was an OOMKill from the underlying OS in the Docker environment. I allocated 1G for the container and restricted the Java heap to 736m. But my application uses Netty, which allocates its own direct (off-heap) buffers, bypassing the heap. As more connections appear, Netty allocates more direct buffers, which eventually leads to the OOMKill despite a perfectly healthy heap.
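To actually see that off-heap growth (it never shows up on jconsole's heap graphs), it helps to log Netty's own allocator metrics. Below is a minimal sketch, assuming Netty 4.1+ and the default pooled allocator (PooledByteBufAllocator.DEFAULT); the class name and logging period are just illustrative:

```java
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocatorMetric;

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: periodically logs how much heap vs. direct (off-heap)
// memory Netty's default pooled allocator is holding. The direct figure does
// not appear in the JVM heap graphs, but it still counts against the
// container's memory limit.
public class NettyMemoryProbe implements Runnable {

    @Override
    public void run() {
        PooledByteBufAllocatorMetric m = PooledByteBufAllocator.DEFAULT.metric();
        System.out.printf("netty heap buffers:   %d bytes%n", m.usedHeapMemory());
        System.out.printf("netty direct buffers: %d bytes%n", m.usedDirectMemory());
    }

    public static void main(String[] args) {
        // Log every 10 seconds; in a real app you would wire this into your
        // existing metrics/monitoring instead.
        Executors.newSingleThreadScheduledExecutor()
                .scheduleAtFixedRate(new NettyMemoryProbe(), 0, 10, TimeUnit.SECONDS);
    }
}
```

The practical takeaway is that the container limit has to cover more than the heap: -Xmx plus direct buffers (which can be capped with -XX:MaxDirectMemorySize and/or Netty's io.netty.maxDirectMemory system property, depending on the Netty version) plus metaspace and thread stacks all have to fit under the 1G given to the container.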