I read the following today:
Direct ByteBuffer objects clean up their native buffers automatically but can only do so as part of Java heap GC — so they do not automatically respond to pressure on the native heap. GC occurs only when the Java heap becomes so full it can't service a heap-allocation request or if the Java application explicitly requests it (not recommended because it causes performance problems).
I was under the impression that using direct ByteBuffers meant you had to manually manage the allocation/deallocation of native memory and that it wasn't subject to GC at all. However, this article seems to say that if a GC does occur, the direct ByteBuffer itself is subject to collection.
I also thought that one of the main motivations for off-heap storage is to avoid problems caused by GC (e.g. long pauses).
The DirectByteBuffer objects themselves are very small: they essentially just hold a pointer into native memory. This enables zero-copy IO and lets you allocate large buffers without expanding the managed heap.
So those objects generally don't increase GC pressure much.
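To make that concrete, here is a minimal sketch (class name and buffer size are just illustrative, and the heap measurement is noisy, so treat the printed numbers as indicative only) showing that allocateDirect reserves native memory while the on-heap footprint barely moves:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();

        // 256 MiB of native memory; the Java object that tracks it is tiny.
        ByteBuffer direct = ByteBuffer.allocateDirect(256 * 1024 * 1024);

        long heapAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("Heap growth:      " + (heapAfter - heapBefore) + " bytes");
        System.out.println("Native capacity:  " + direct.capacity() + " bytes");
    }
}
```

Note that the total amount of native memory you can allocate this way is still bounded by the JVM, via -XX:MaxDirectMemorySize (by default it is tied to the maximum heap size).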
What they do consume is native resources: virtual address space and possibly physical RAM or swap space. If you're using memory-mapped files instead of buffers created by allocateDirect, you may be able to vastly exceed the available physical RAM, because the memory is backed by disk storage (similar to swap). See the sketch below.
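For the memory-mapped case, a sketch like the following (the file name and the 1 GiB size are arbitrary) maps a region through FileChannel.map; pages are faulted in from the file on demand, which is why the total mapped size can exceed physical RAM. Keep in mind that a single MappedByteBuffer is still capped at 2 GiB because buffer indices are ints.

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedFileDemo {
    public static void main(String[] args) throws Exception {
        Path path = Path.of("large-data.bin");  // placeholder file name
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.READ, StandardOpenOption.WRITE,
                StandardOpenOption.CREATE)) {
            // Map a 1 GiB region; the file is grown to that size if it is
            // smaller, and pages are loaded from disk only when touched.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1L << 30);
            buf.put(0, (byte) 42);               // touch a page
            System.out.println(buf.get(0));
        }
    }
}
```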
The only thing you cannot do, at least through official APIs, is unmap the memory ranges that the direct buffers point to. Instead, the underlying memory is released once the buffer objects themselves are collected by the GC.
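The pattern below is only a sketch of that point: drop every reference and let the collector free the native block whenever it runs. System.gc() is shown purely to illustrate that even an explicit request is just a hint (and, as the quoted article notes, not recommended).

```java
import java.nio.ByteBuffer;

public class ReleaseDemo {
    public static void main(String[] args) throws InterruptedException {
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        direct.put(0, (byte) 1);  // ... use the buffer ...

        // There is no official API to free the native memory eagerly.
        // Dropping every reference makes the small Java object eligible for
        // collection; the native block is released only after it is collected.
        direct = null;

        // A GC may or may not run soon; this call is only a hint.
        System.gc();
        Thread.sleep(100);
    }
}
```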
TL;DR: You don't get fully manual memory management, but you do escape some of the limitations of the managed Java heap.