Tags: java, fuse, jnr

FUSE filesystem in Java - JVM error: double free or corruption


I'm writing a FUSE filesystem in Java using the jnr-fuse library (https://github.com/SerCeMan/jnr-fuse), which internally uses JNR for native access.

The filesystem works as a frontend to an Amazon S3 bucket, basically enabling a user to mount their bucket as a normal storage device.

While reworking my read method, I came across the following JVM error:

*** Error in `/usr/local/bin/jdk1.8.0_65/bin/java': double free or corruption (!prev): 0x00007f3758953d80 ***

The error always happens while copying a file from the FUSE filesystem to the local FS, typically on the second invocation of the read method (i.e. for the second 128 KiB block of data):

 cp /tmp/fusetest/benchmark/benchmarkFile.large /tmp

The read method in question is:

public int read(String path, Pointer buf, @size_t long size, @off_t long offset, FuseFileInfo fi) {
    LOGGER.debug("Reading file {}, offset = {}, read length = {}", path, offset, size);
    S3fsNodeInfo nodeInfo;
    try {
        nodeInfo = this.dbHelper.getNodeInfo(S3fsPath.fromUnixPath(path));
    } catch (FileNotFoundException ex) {
        LOGGER.error("Read called on non-existing node: {}", path);
        return -ErrorCodes.ENOENT();
    }
    try {
        // *** important part start
        InputStream is = this.s3Helper.getInputStream(nodeInfo.getPath(), offset, size);
        byte[] data = new byte[is.available()];
        int numRead = is.read(data, 0, (int) size);
        LOGGER.debug("Got {} bytes from stream, putting to buffer", numRead);
        buf.put(offset, data, 0, numRead);
        return numRead;
        // *** important part end
    } catch (IOException ex) {
        LOGGER.error("Error while reading file {}", path, ex);
        return -ErrorCodes.EIO();
    }
}

The input stream used is in fact a ByteArrayInputStream over a buffer that I'm using to reduce HTTP communication with S3. I'm running FUSE in single-threaded mode for now to avoid any concurrency-related issues.
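For context, here is a minimal sketch of the kind of block cache that could sit behind getInputStream. All names and the block size are illustrative assumptions, not the actual CachedS3Helper implementation:

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.util.HashMap;
    import java.util.Map;

    class BlockCacheSketch {
        // Hypothetical block size; the log below suggests blocks of at
        // least 256 KiB, since both reads hit "cache block 0".
        private static final int BLOCK_SIZE = 5 * 1024 * 1024;

        private final Map<String, byte[]> cache = new HashMap<>();

        InputStream getInputStream(String path, long offset, long size) {
            long blockIndex = offset / BLOCK_SIZE;
            int blockOffset = (int) (offset % BLOCK_SIZE);
            byte[] block = cache.computeIfAbsent(path + "#" + blockIndex,
                    key -> fetchBlockFromS3(path, blockIndex));
            int length = (int) Math.min(size, block.length - blockOffset);
            // Expose only the requested range of the cached block
            return new ByteArrayInputStream(block, blockOffset, length);
        }

        private byte[] fetchBlockFromS3(String path, long blockIndex) {
            // Placeholder for a ranged S3 GET (Range: bytes=start-end)
            return new byte[BLOCK_SIZE];
        }
    }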

Interestingly, I already had a working version that did not do any internal caching but was otherwise exactly the same as the code shown here.

Unfortunately I'm not really into JVM internals, so I'm not sure how to get to the bottom of this - normal debugging yields nothing, as the actual error seems to happen on the C side.

Here's the full console output of the read operations triggered by the above command:

2016-02-29 02:08:45,652 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Reading file /benchmark/benchmarkFile.large, offset = 0, read length = 131072
unique: 7, opcode: READ (15), nodeid: 3, insize: 80, pid: 8297
read[0] 131072 bytes from 0 flags: 0x8000
2016-02-29 02:08:46,024 DEBUG s3fs.fs.CachedS3Helper [main] - Getting data from cache - path = /benchmark/benchmarkFile.large, offset = 0, length = 131072
2016-02-29 02:08:46,025 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large not yet in cache, add it
2016-02-29 02:08:57,178 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large found in cache!
   read[0] 131072 bytes from 0
   unique: 7, success, outsize: 131088
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CachedS3Helper [main] - Starting actual cache read for path /benchmark/benchmarkFile.large
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CachedS3Helper [main] - Reading data from cache block 0, blockOffset = 0, length = 131072
2016-02-29 02:08:57,179 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Got 131072 bytes from stream, putting to buffer
2016-02-29 02:08:57,180 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Reading file /benchmark/benchmarkFile.large, offset = 131072, read length = 131072
unique: 8, opcode: READ (15), nodeid: 3, insize: 80, pid: 8297
read[0] 131072 bytes from 131072 flags: 0x8000
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Getting data from cache - path = /benchmark/benchmarkFile.large, offset = 131072, length = 131072
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Path /benchmark/benchmarkFile.large found in cache!
2016-02-29 02:08:57,570 DEBUG s3fs.fs.CachedS3Helper [main] - Starting actual cache read for path /benchmark/benchmarkFile.large
2016-02-29 02:08:57,571 DEBUG s3fs.fs.CachedS3Helper [main] - Reading data from cache block 0, blockOffset = 131072, length = 131072
2016-02-29 02:08:57,571 DEBUG s3fs.fs.CacheEnabledS3fs [main] - Got 131072 bytes from stream, putting to buffer
   read[0] 131072 bytes from 131072
   unique: 8, success, outsize: 131088
*** Error in `/usr/local/bin/jdk1.8.0_65/bin/java': double free or corruption (!prev): 0x00007fcaa8b30c80 ***

Solution

  • Ok, this was a stupid mistake really...

    buf.put(offset, data, 0, numRead);
    

    is of course nonsense - the offset parameter passed to read is the offset within the file being read, not an offset into the destination buffer.

    Works with:

    buf.put(0, data, 0, numRead);
    

    The rather cryptic error just means that I'm writing to memory locations I've no business writing to. Curious though why it's this error message rather than the segfault I'd normally expect here... presumably the stray write lands inside the heap and corrupts glibc's allocator metadata, which a later free() then detects and reports as "double free or corruption", whereas a segfault would only occur if the write hit an unmapped page.
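
    For completeness, here is the important part of read with the fix applied. Sizing the array from size instead of is.available() and the explicit EOF check are my additions for robustness, not part of the original fix:

        InputStream is = this.s3Helper.getInputStream(nodeInfo.getPath(), offset, size);
        byte[] data = new byte[(int) size];
        int numRead = is.read(data, 0, (int) size); // may still be a short read near EOF
        if (numRead <= 0) {
            return 0; // EOF: tell FUSE there is nothing left to read
        }
        // Destination offset is 0: buf points at the start of FUSE's reply
        // buffer. The file offset only matters when fetching the data, which
        // getInputStream already handled above.
        buf.put(0, data, 0, numRead);
        return numRead;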