I am building a GIS module for a system using the ArcGIS Maps SDK for Java v200.0.0. The general flow is: I connect to a few Esri servers, pull down some features/data using their APIs, run a few calculations, and write the results to a file. Since building the module and running the calculations on a local test server, I have noticed that our virtual memory usage keeps climbing higher with each subsequent calculation, until ultimately the application crashes. So this clearly sounds like some form of memory leak.
First, I investigated the memory usage of the JVM, thinking perhaps some large objects were not being garbage collected. I charted our memory usage in VisualVM, as seen here; however, everything is performing as expected. Memory usage spikes for each new calculation, and all unreferenced objects are garbage collected at the end of the calculation. I also checked our metaspace usage, but that never exceeds 50 MB, so metaspace is not the issue. I am therefore running into some form of native memory leak.
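As a side note for anyone reproducing this triage: before reaching for external tools, the JVM's own Native Memory Tracking can confirm that the growth is not in JVM-managed native memory. A minimal sketch (the jar name and `<pid>` are placeholders for your own application):

```shell
# Start the application with Native Memory Tracking enabled
# ("summary" has modest overhead; "detail" records per-call-site data)
java -XX:NativeMemoryTracking=summary -jar my-gis-module.jar

# In another terminal, snapshot JVM-tracked native memory
jcmd <pid> VM.native_memory summary

# Or take a baseline, run a few calculations, then diff against it
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
```

If the process RSS keeps growing while the NMT totals stay flat, the leak is in a native library outside the JVM's view, which is exactly the situation jemalloc profiling (below in my debugging) is suited for.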
Because the memory leak is outside of the JVM, one of the most typical culprits is a file stream that is never closed. I do write to a file at the end of my calculation, but I DO close it, as seen here:
. . .
// map here is an Esri ArcGISMap object that holds some data and must be loaded fully
map.addDoneLoadingListener(() -> {
    if (map.getLoadStatus() == LoadStatus.LOADED) {
        String mapJson = map.toJson();
        FileWriter jsonFileWriter = null;
        try {
            // file here is a valid File already created
            jsonFileWriter = new FileWriter(file);
            jsonFileWriter.write(mapJson);
            mapDoneWriting = true;
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (jsonFileWriter != null) {
                    jsonFileWriter.flush();
                    jsonFileWriter.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    } else {
        throw new IllegalArgumentException("Error writing json map file");
    }
});
. . .
To chase this leak further, I discovered these blog posts, here and here, which describe a very similar, obscure memory leak. In summary: other large Java systems had a native memory leak that could not be found, and both blogs successfully debugged and later patched the leak using a tool called jemalloc (spoiler: the culprit was an Inflater/Deflater object used for compression/decompression not being ended/closed). Jemalloc is essentially a drop-in memory allocator like malloc, but with additional debugging and profiling functionality. I replaced the JVM's default memory allocator, malloc, with jemalloc and then generated memory usage reports in a tree structure using jeprof (a reporting tool bundled with jemalloc). Now here is where my debugging hits a road block.
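For completeness, the allocator swap and report generation were along these lines (the library path, jar name, and dump interval are assumptions for illustration; note that jemalloc must be built with profiling support, i.e. --enable-prof):

```shell
# Preload jemalloc so it replaces glibc malloc for the JVM process
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2

# Turn on allocation profiling; dump a profile every ~2^30 bytes allocated
export MALLOC_CONF=prof:true,lg_prof_interval:30,prof_prefix:jeprof.out

java -jar my-gis-module.jar

# Render the dumped profiles as a call tree attributed by bytes allocated
jeprof --show_bytes --pdf "$(which java)" jeprof.out.*.heap > profile.pdf
```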
Here are the reports generated by jeprof (zoomed in on the most likely culprits): Overview
Zooming in on some odd potential culprits: Weirdness
And further weirdness: Weirdness 2
Here is where I need some help. I am interpreting these results based solely on the examples I have seen of similar output (each blog post I linked shows an example; I could not find documentation on how to interpret this graph). From my understanding, the percentage at the bottom of each node is the percentage of my application's total allocated memory attributable to that method. That points to 90% of my memory being allocated under RT_Vector_setElementRemovedCallback, with 70% of that in lerc_decodeToDouble (?). I found the LERC project, which has a GitHub repository: Lerc Repository. Correct me if I am interpreting these results wrongly.
Does this appear to be an Esri memory leak in the SDK, or am I interpreting these results incorrectly? As anyone who has chased a memory leak can understand, knowing where to go next is difficult. If there are any other places I should check, please let me know. Thank you.
FOLLOW UP:
There appears to be an Esri memory leak on Linux systems only. The Esri developers are debugging it now, and my initial interpretation was correct.