chronicle-bytes

chronicle-bytes raising a segfault


Following the accepted answer in chronicle-bytes shared DirectBytesStores, I have set up my code the same way.

I'm generating 1,000,000 objects that I write out to a MappedFile, and I would like each object to manage its own reads/writes to the MappedFile:

import net.openhft.chronicle.bytes.PointerBytesStore;

public class DataObject {

    public static final int LENGTH = 12;
    private static final int A_OFFSET = 0;
    private static final int B_OFFSET = 4;

    private final PointerBytesStore bytes;

    public DataObject(long memoryAddress) {
        // Wrap a fixed-length window of raw memory starting at memoryAddress
        this.bytes = new PointerBytesStore();
        this.bytes.set(memoryAddress, LENGTH);
    }

    public int getA() {
        return this.bytes.readInt(A_OFFSET);
    }

    public void setA(int a) {
        this.bytes.writeInt(A_OFFSET, a);
    }

    ...
}

Then I create the DataObject instances with:

MappedFile mappedFile = MappedFile.mappedFile(new File(tmpfile), 64 << 10);
MappedBytes mappedBytes = MappedBytes.mappedBytes(mappedFile);
long offset = 0;
List<DataObject> myList = new ArrayList<>();
for (int i = 0; i < 1_000_000; i++) {
    long address = mappedBytes.addressForRead(offset);
    myList.add(new DataObject(address));
    offset += DataObject.LENGTH;
}

I have found, using code similar to the above, that chronicle-bytes triggers a segfault once I reach roughly 100,000 objects. The segfault tends to happen when reading from or writing to a PointerBytesStore, but it is not predictable.

Is this a bug in chronicle-bytes, or am I misusing the library? Any help/suggestions/recommendations would be greatly appreciated.


Solution

  • MappedFile maps memory in chunks. Unless you retain a chunk by reserving it, the underlying memory is released once the MappedBytes no longer uses it, so any PointerBytesStore still pointing into that chunk ends up reading unmapped memory, which is what produces the segfault.

    One solution is to use a chunk size large enough that you only ever use one chunk; see the sketch below.

    Another approach is to use Chronicle Map, as it will manage the memory as required.
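
    For the single-chunk approach, a minimal sketch, reusing tmpfile and DataObject from the question (the capacity arithmetic is illustrative, and MappedFile may round the chunk size up to an OS page multiple):

    // One chunk sized to cover all 1,000,000 objects, so the mapping
    // is never released while mappedBytes remains open.
    long capacity = 1_000_000L * DataObject.LENGTH;
    MappedFile mappedFile = MappedFile.mappedFile(new File(tmpfile), capacity);
    MappedBytes mappedBytes = MappedBytes.mappedBytes(mappedFile);

    List<DataObject> myList = new ArrayList<>();
    for (long offset = 0; offset < capacity; offset += DataObject.LENGTH) {
        // Every address falls inside the one chunk, so it stays valid
        myList.add(new DataObject(mappedBytes.addressForRead(offset)));
    }

    Note that the addresses handed to each DataObject stay valid only while mappedBytes is open, so keep a reference to it and close it only once the objects are no longer in use.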