I have a machine that inserts around 502,000,000 rows into a BDB JE (Berkeley DB Java Edition) database. An example of a key and a value is:
juhnegferseS0004-47-19332 39694.290336
All of the keys and values are roughly of the same length. The JVM is started with the following parameters:
-Xmx9G -Xms9G -XX:+UseConcMarkSweepGC -XX:NewSize=1024m -server
But still, when it reaches ~50,000,000 rows, the JVM is "Killed" (I just get the message "Killed"; I don't know how or by what it gets killed). My guess is that it tries to run garbage collection and then cannot free up enough memory, or something like that. But with that -Xmx value, I would expect it not to have any problems.
I use deferred writes, and the log file size is set to 100 MB. Switching from the DPL to the Base API did not make any difference.
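For reference, this is roughly how I open the environment and database (a simplified sketch; the environment directory and the database name "rows" are placeholders, and the key/value pair is the example from above):

import java.io.File;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setAllowCreate(true);
// 100 MB log files (je.log.fileMax)
envConfig.setConfigParam(EnvironmentConfig.LOG_FILE_MAX, String.valueOf(100 * 1024 * 1024));
Environment env = new Environment(new File("/path/to/env"), envConfig);

DatabaseConfig dbConfig = new DatabaseConfig();
dbConfig.setAllowCreate(true);
dbConfig.setDeferredWrite(true); // deferred writes, as described above
Database db = env.openDatabase(null, "rows", dbConfig);

DatabaseEntry key = new DatabaseEntry("juhnegferseS0004-47-19332".getBytes());
DatabaseEntry value = new DatabaseEntry("39694.290336".getBytes());
db.put(null, key, value);

db.sync(); // flush deferred writes to disk
db.close();
env.close();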
I am using JDK 6 on SUSE x86_64 with 12 GB of RAM. Other processes need the rest of the RAM, so I can't really allocate more than 9 GB to this insertion task.
JVM:
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
Any tips for fixing this issue are appreciated.
There is no single solution that is right for all situations. You will have to try different GC collectors to see which one performs best for your workload. Note also that a bare "Killed" message on Linux usually means the kernel's OOM killer terminated the process because the machine ran out of physical memory, so check dmesg or /var/log/messages to confirm what happened before tuning the collector.
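For example, with HotSpot 6 you could compare the throughput (parallel) collector against the CMS collector you are using now, and enable GC logging to see what the collector is actually doing. A sketch, reusing the heap sizes from the question:

-Xmx9G -Xms9G -XX:+UseParallelGC -XX:+UseParallelOldGC -verbose:gc -XX:+PrintGCDetails -server
-Xmx9G -Xms9G -XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -server

The GC log will show whether the process is actually spending time in collections before it dies, or whether it is killed from outside while the heap still has room.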