java, memory-leaks, garbage-collection, heap-memory, netty

Netty 4.0 HTTP Chunks memory leaks?


I'm trying to make HTTP Transfer Encoding Chunked work with Netty 4.0.

I've had some success with it so far: it works well with small payloads.

Then I tried it with large data, and it started to hang.

I suspect there might be a problem with my code, or maybe a leak with ByteBuf.copy().

I stripped my code down to the bare minimum, to be sure there was no other source of leaks or side effects, and ended up writing this test. The complete code is here.

Basically, it sends 1 GB of 0x0 bytes when you connect to port 8888 with wget. I reproduce the problem when I connect with

wget http://127.0.0.1:8888 -O /dev/null

Here's the handler:

    protected void channelRead0(ChannelHandlerContext ctx, FullHttpMessage msg) throws Exception {
        DefaultHttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
        HttpHeaders.setTransferEncodingChunked(response);
        response.headers().set(CONTENT_TYPE, "application/octet-stream");
        ctx.write(response);

        ByteBuf buf = Unpooled.buffer();
        int GIGABYTE = (4 * 1024 * 1024); // 4M iterations of 256 B = 1 GB
        for (int i = 0; i < GIGABYTE; i++) {
            buf.writeBytes(CONTENT_256BYTES_ZEROED);
            ctx.writeAndFlush(new DefaultHttpContent(buf.copy()));
            buf.clear();
        }
        ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT).addListener(ChannelFutureListener.CLOSE);
    }

Is there anything wrong with my approach?

EDIT:

With VisualVM I've found that there is a memory leak in the ChannelOutboundBuffer.

The Entry[] buffer keeps growing; addCapacity() is called multiple times. The Entry array seems to contain copies of the buffers that are (or should be) written to the wire.

With Wireshark I can see data coming in...

Here's a Dropbox link to the heap dump.
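
In hindsight, the loop keeps calling writeAndFlush() without ever checking whether the socket is keeping up, so every copied chunk just accumulates in the ChannelOutboundBuffer. For comparison, here is a minimal sketch of a backpressure-aware version (not the code I ended up using; writeChunks and chunksLeft are hypothetical names, and writeChunks would be called once from channelRead0 after writing the response headers). It writes only while Channel.isWritable() is true and resumes from channelWritabilityChanged():

    private int chunksLeft = 4 * 1024 * 1024; // 4M x 256 B = 1 GB

    private void writeChunks(ChannelHandlerContext ctx) {
        // Queue chunks only while the outbound buffer is below its high-water mark.
        while (chunksLeft > 0 && ctx.channel().isWritable()) {
            ctx.write(new DefaultHttpContent(Unpooled.wrappedBuffer(CONTENT_256BYTES_ZEROED)));
            chunksLeft--;
        }
        ctx.flush();
        if (chunksLeft == 0) {
            chunksLeft = -1; // make sure the trailer is only sent once
            ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT)
               .addListener(ChannelFutureListener.CLOSE);
        }
    }

    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) throws Exception {
        // The socket drained below the low-water mark: keep writing.
        if (ctx.channel().isWritable()) {
            writeChunks(ctx);
        }
        super.channelWritabilityChanged(ctx);
    }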


Solution

  • I have found what I was doing wrong.

    The for loop that called writeAndFlush() was not working well and was likely the cause of the leak.

    I tried various things (see the many revisions in the gist link). See the gist version at the time of writing.

    I found that the best way to achieve what I wanted, without memory leaks, was to extend InputStream and write the InputStream, wrapped in an io.netty.handler.stream.ChunkedStream, to the context (not using writeAndFlush()).

        DefaultHttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
        HttpHeaders.setTransferEncodingChunked(response);
        response.headers().set(CONTENT_TYPE, "application/octet-stream");
        ctx.write(response);

        InputStream is = new InputStream() {
            int offset = -1;
            byte[] buffer = null;

            // ONE GB (max size for the test)
            int sz = 1024 * 1024 * 1024;

            @Override
            public int read() throws IOException {
                if (offset == -1 || (buffer != null && offset == buffer.length)) {
                    fillBuffer();
                }
                if (buffer == null) {
                    return -1; // end of stream
                }
                // Mask to honor the InputStream contract: 0-255, or -1 at EOF.
                return buffer[offset++] & 0xFF;
            }

            // This method simulates an application that would write to the buffer.
            private void fillBuffer() {
                offset = 0;
                if (sz <= 0) { // LIMIT TO ONE GB
                    buffer = null;
                    return;
                }
                buffer = new byte[1024];
                System.arraycopy(CONTENT_1KB_ZEROED, 0, buffer, 0, CONTENT_1KB_ZEROED.length);
                sz -= 1024;
            }
        };

        ctx.write(new ChunkedStream(new BufferedInputStream(is), 8192));
        ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT).addListener(ChannelFutureListener.CLOSE);
    

    The code writes 1 GB of data to the client in 8 KB chunks. I was able to run 30 simultaneous connections without memory or hanging problems.
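
    Note that ChunkedStream is a ChunkedInput, and Netty will only drain it if an io.netty.handler.stream.ChunkedWriteHandler is in the pipeline. A minimal sketch of the initializer I'm assuming here (HttpChunkedResponseHandler is a hypothetical name for the handler above, and the aggregator size is arbitrary):

        ServerBootstrap b = new ServerBootstrap();
        b.childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
                ch.pipeline().addLast(new HttpServerCodec());
                ch.pipeline().addLast(new HttpObjectAggregator(65536)); // yields the FullHttpMessage
                ch.pipeline().addLast(new ChunkedWriteHandler());       // drains ChunkedInput as the socket allows
                ch.pipeline().addLast(new HttpChunkedResponseHandler()); // the handler shown above
            }
        });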