Tags: java, linux, server, ram

Out-of-memory exception (Linux Debian server)


I have a Linux server (Debian 10, 16 GB physical memory) running three Docker containers and two Java programs that perform financial calculations.

Docker containers:

So nothing special.

Java programs:

These programs are started as follows. Note the -Xmx10g parameter:

./java/jdk/bin/java -Xmx10g -jar portal-backend-1.0-SNAPSHOT-runner.jar >> log/portal.log 2>&1 &

./java/jdk/bin/java -Xmx10g -jar pea-backend-1.0-SNAPSHOT-runner.jar >> log/pea.log 2>&1 &

Swap file:

Configuration of my 24 GB swap file:

fallocate -l 24G /swapfile

chown root:root /swapfile

sudo chmod 0600 /swapfile

Format swapfile:

mkswap /swapfile

Activate swapfile:

swapon /swapfile

Added to the file "/etc/fstab":

/swapfile swap swap defaults 0 0

When I now run swapon -s to check the status, I get the following output:

Filename    Type    Size        Used    Priority
/swapfile   file    25165820    0       -2

PROBLEM:

Even though I have activated a swap file, I get an out-of-memory exception. The interesting thing is that swap usage is always 0 KB; the swap file never seems to be touched. What am I doing wrong?

Attached is a screenshot of htop taken shortly before I get the out-of-memory exception: [htop screenshot]



Solution

  • To answer at least your question of why you get the OutOfMemoryError even though your swap file is not used:

    These two things must be viewed separately. With the option -Xmx10g you assigned a fixed maximum heap size to your Java process. The JVM will not request more heap memory from the operating system than that; instead it throws the error you observed, as the sketch below demonstrates.
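    A minimal sketch that reproduces this behavior (the class name and allocation size are made up for illustration): run it with a small heap cap, e.g. java -Xmx64m HeapCapDemo, and it dies with java.lang.OutOfMemoryError: Java heap space long before physical RAM or swap is exhausted.

    import java.util.ArrayList;
    import java.util.List;

    // Run with: java -Xmx64m HeapCapDemo
    // Throws OutOfMemoryError once the 64 MB heap cap is reached,
    // no matter how much free RAM or swap the OS still has.
    public class HeapCapDemo {
        public static void main(String[] args) {
            List<byte[]> blocks = new ArrayList<>();
            while (true) {
                blocks.add(new byte[1024 * 1024]); // hold on to 1 MB per iteration
                System.out.println("Allocated " + blocks.size() + " MB");
            }
        }
    }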

    Since the JVM needs more memory than just the heap, you can see in the htop screenshot that the crashing process is using 13.9 GB of the operating system's memory. That seems plausible; you could keep an eye on it for a while.
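    If you want to confirm this from inside the process, a small sketch like the following (the class name is made up) prints heap usage against the -Xmx cap plus the non-heap usage that comes on top of it. Note that the non-heap figure covers metaspace and the code cache; thread stacks and direct buffers are extra and not reported by this bean.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class MemoryReport {
        public static void main(String[] args) {
            MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = bean.getHeapMemoryUsage();
            MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
            // Heap is capped by -Xmx; non-heap (metaspace, code cache, ...) is not.
            System.out.printf("Heap:     used=%d MB, max=%d MB%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20);
            System.out.printf("Non-heap: used=%d MB%n", nonHeap.getUsed() >> 20);
        }
    }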

    So if heap exhaustion is actually the problem, you likely either have a memory leak in your program (e.g. caches or collections that are never cleared even though their contents are no longer needed, or listeners that are never deregistered), or it has to process a really big chunk of data but was not written to do so in a "streaming" way (e.g. for network requests) and instead tries to hold everything on the heap at once. A sketch of the leak pattern follows below.
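    A minimal sketch of that leak pattern (all names are hypothetical): a static collection grows with every request and is never cleared, so the heap fills up over time no matter how large -Xmx is.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical leak: an unbounded static collection that only grows.
    public class RequestCache {
        private static final List<byte[]> HISTORY = new ArrayList<>();

        static void handleRequest(byte[] payload) {
            HISTORY.add(payload); // kept "just in case", never removed
            // ... actual processing ...
        }
        // Fix: drop entries once they are processed, or use a bounded
        // cache (e.g. an LRU map) instead of an unbounded list.
    }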

    Just increasing the maximum available heap may help temporarily, but if the cause is a big chunk of input data, the next, even bigger chunk will crash the program again.

    If the problem is a memory leak, a bigger heap will probably lead to a situation where garbage collection takes longer and longer, and your program will be unresponsive while garbage collection is running.

    To get rid of the crash, you will have to investigate whether it is "time based" or "input based". Time based means that the program typically crashes after running for a certain amount of time (under normal load). Input based means that specific inputs trigger the crash (big inputs, or inputs that trigger special calculations).
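    Two things can help with that investigation. HotSpot can write a heap dump at the moment of the crash if you start the JVM with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<dir>, and you can analyze the dump afterwards (for example with Eclipse MAT). In addition, a tiny in-process logger like the sketch below (the class name is made up) records heap usage over time, which makes a slow, time-based climb easy to spot in your existing log files.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class HeapLogger {
        // Call once at startup; prints heap usage every 30 seconds.
        public static void start() {
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor(r -> {
                        Thread t = new Thread(r, "heap-logger");
                        t.setDaemon(true); // don't keep the JVM alive on its own
                        return t;
                    });
            scheduler.scheduleAtFixedRate(() -> {
                Runtime rt = Runtime.getRuntime();
                long usedMb = (rt.totalMemory() - rt.freeMemory()) >> 20;
                long maxMb = rt.maxMemory() >> 20;
                System.out.println("heap used: " + usedMb + " / " + maxMb + " MB");
            }, 0, 30, TimeUnit.SECONDS);
        }
    }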