java, spring, spring-mvc

Spring-Web OOM when streaming after update from 6.0.23 to version 6.1.4 and above


We have an application that streams files from the filesystem and PUTs them to an endpoint. This worked fine up to spring-web 6.0.23.

After updating to version 6.1.4, we consistently encountered the following OutOfMemoryError:

10:48:11,632 ERROR org.springframework.scheduling.support.TaskUtils$LoggingErrorHandler   %NHET service - ourCloudUnawareScheduler-6 - Unexpected error occurred in scheduled task
java.lang.OutOfMemoryError: Java heap space
    at org.springframework.util.FastByteArrayOutputStream.addBuffer(FastByteArrayOutputStream.java:325) ~[spring-core-6.1.4.jar:6.1.4]
    at org.springframework.util.FastByteArrayOutputStream.write(FastByteArrayOutputStream.java:126) ~[spring-core-6.1.4.jar:6.1.4]
    at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:92) ~[commons-io-2.11.0.jar:2.11.0]
    at java.base/java.security.DigestOutputStream.write(DigestOutputStream.java:143) ~[?:?]
    at java.base/java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:261) ~[?:?]
    at java.base/java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:210) ~[?:?]
    at java.base/java.util.zip.GZIPOutputStream.write(GZIPOutputStream.java:148) ~[?:?]
    at org.apache.commons.compress.utils.CountingOutputStream.write(CountingOutputStream.java:62) ~[commons-compress-1.23.0.jar:1.23.0]
    at org.apache.commons.compress.utils.FixedLengthBlockOutputStream$BufferAtATimeOutputChannel.write(FixedLengthBlockOutputStream.java:91) ~[commons-compress-1.23.0.jar:1.23.0]
    at org.apache.commons.compress.utils.FixedLengthBlockOutputStream.writeBlock(FixedLengthBlockOutputStream.java:259) ~[commons-compress-1.23.0.jar:1.23.0]
    at org.apache.commons.compress.utils.FixedLengthBlockOutputStream.maybeFlush(FixedLengthBlockOutputStream.java:169) ~[commons-compress-1.23.0.jar:1.23.0]
    at org.apache.commons.compress.utils.FixedLengthBlockOutputStream.write(FixedLengthBlockOutputStream.java:206) ~[commons-compress-1.23.0.jar:1.23.0]
    at org.apache.commons.compress.archivers.tar.TarArchiveOutputStream.write(TarArchiveOutputStream.java:713) ~[commons-compress-1.23.0.jar:1.23.0]

After a lot of digging, we found that the implementation was refactored significantly between 6.0.23 and 6.1.4, in particular in commit 033bebf.

Every spring-web version from 6.1.x onwards uses a FastByteArrayOutputStream while processing a request: AbstractClientHttpRequest:getBody -> AbstractStreamingClientHttpRequest:getBodyInternal -> new FastByteArrayOutputStream

Before (e.g. in spring-web 6.0.23), the chain was: AbstractClientHttpRequest:getBody -> SimpleStreamingClientHttpRequest:getBodyInternal -> HttpURLConnection:getOutputStream

The FastByteArrayOutputStream-based implementation buffers the whole request body in memory, which inevitably triggers an OutOfMemoryError once the request body no longer fits in the heap.
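To make the effect concrete, here is a minimal plain-JDK sketch (my own code, not the actual Spring classes; the class and method names are made up) contrasting the two behaviors: the 6.1.x path accumulates every written byte on the heap until the body is complete, while the 6.0.x path hands each chunk straight to the connection.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;

public class BufferingVsStreaming {

  // 6.1.x-style: everything written is held on the heap until the body is complete.
  static long bufferedBytesHeld(byte[] chunk, int chunks) {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    for (int i = 0; i < chunks; i++) {
      buffer.writeBytes(chunk); // heap usage grows with the total body size
    }
    return buffer.size(); // all bytes are still in memory at this point
  }

  // 6.0.x-style: each chunk goes directly to the sink; only one chunk is live.
  static long streamedBytesHeld(byte[] chunk, int chunks) {
    OutputStream sink = OutputStream.nullOutputStream(); // stands in for the socket
    try {
      for (int i = 0; i < chunks; i++) {
        sink.write(chunk); // nothing accumulates beyond the current chunk
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    return chunk.length; // at most one chunk is resident at a time
  }

  public static void main(String[] args) {
    byte[] chunk = new byte[64 * 1024];
    System.out.println("buffered holds: " + bufferedBytesHeld(chunk, 100)); // 6553600
    System.out.println("streamed holds: " + streamedBytesHeld(chunk, 100)); // 65536
  }
}
```

With an 800 MB body, the buffered variant needs roughly 800 MB of heap, while the streamed variant never holds more than one chunk.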

Related Questions

There is a similar question, but it went in a completely different direction:

Spring boot Java heap space for downloading large files

Reproducer

To demonstrate this OOM live, I created a reproducer so anyone can experience it. I copied in some of the 6.0.23 classes so the working and broken implementations can be run side by side.

In the reproducer, I trimmed down the streams compared to our real implementation: it just uses a FileInputStream to stream some mock data to the OutputStream of the request body, and the receiving endpoint simply drains the stream.

The details can be found in this Java class.

Don't forget to limit the available memory by adding -Xmx500m when running the reproducer.

Relevant code

Triggering the streaming via the RestTemplate works the same for the old and the new implementation. In the reproducer, the RestTemplate is instantiated differently because we want both the old and the new variant in the same project.

  @PostMapping("/startBroken")
  public void startBroken() throws IOException {
    Path sourcePath = createTestFile();
    RequestCallback requestCallback =
        request -> {
          try (OutputStream os = request.getBody();
              FileInputStream fis = new FileInputStream(sourcePath.toFile())) {
            doStream(fis, os);
          }
        };

    // The important difference is the version of the restTemplate
    restClientBroken()
        .execute("http://localhost:8080/putData", HttpMethod.PUT, requestCallback, null);
    deleteTestFile(sourcePath);
  }

  private void doStream(InputStream inputStream, OutputStream outputStream) throws IOException {
    byte[] buffer = new byte[65536];
    int bytesRead;
    long totalBytesProcessed = 0;
    while ((bytesRead = inputStream.read(buffer)) != -1) {
      totalBytesProcessed += bytesRead;
      outputStream.write(buffer, 0, bytesRead);
      System.out.println("Processed " + totalBytesProcessed + " bytes");
    }
    outputStream.flush();
  }

RestTemplate

We did not change how the RestTemplates are instantiated. In the reproducer you'll find restClientBroken and restClientWorking, where the working one uses local copies of the old spring-web classes:

  private RestTemplate restClientBroken() {
    SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
    requestFactory.setConnectTimeout(2_000);
    requestFactory.setReadTimeout(29_000);
    requestFactory.setBufferRequestBody(false);
    requestFactory.setChunkSize(4096);

    RestTemplate restTemplate = new RestTemplate();
    restTemplate.setRequestFactory(requestFactory);
    return restTemplate;
  }

Test file

Any large enough file will do (with -Xmx500m, an 800 MB file is enough):

  private Path createTestFile() throws IOException {
    Path path = Files.createTempFile("streaming-test-source-", ".tmp");
    try (final BufferedWriter bufferedWriter = Files.newBufferedWriter(path)) {
      for (int i = 0; i < 30_000; i++) {
        int outerCount = i * 1000;
        for (int j = 0; j < 1_000; j++) {
          bufferedWriter.write("This is line " + (outerCount + j) + "\n");
        }
        bufferedWriter.flush();
      }
    }
    return path;
  }

Streaming sink

The example just drains the stream to simulate a consumer:

  @PutMapping("/putData")
  public void receiver(HttpServletRequest request) throws IOException {
    StreamUtils.drain(request.getInputStream());
  }

Questions

Now, my questions are:

  • Is the request body buffering in 6.1.x an intentional change or a regression?
  • How can we stream a large request body without it being buffered in memory?

Solution

  • I'm not sure whether this is an intentional change, but you can work around it by setting the body via StreamingHttpOutputMessage.setBody instead of writing to the OutputStream returned by getBody():

    @PostMapping("/startBroken")
    public void startBroken() throws IOException {
      Path sourcePath = createTestFile();
      RequestCallback requestCallback =
          request -> {
            if (request instanceof StreamingHttpOutputMessage shom) {
              System.out.println("Using Body");
              shom.setBody(out -> {
                try (var fis = new BufferedInputStream(new FileInputStream(sourcePath.toFile()))) {
                  doStream(fis, out);
                }
              });
            } else {
              System.out.println("Using OutputStream");
              try (var fis = new BufferedInputStream(new FileInputStream(sourcePath.toFile()))) {
                doStream(fis, request.getBody());
              }
            }
          };

      // The important difference is in the restClient
      restClientBroken()
          .execute("http://localhost:8080/putData", HttpMethod.PUT, requestCallback, null);
      deleteTestFile(sourcePath);
    }

    private void doStream(InputStream in, OutputStream out) throws IOException {
      StreamUtils.copy(in, out);
    }

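    Why does setBody help? With getBody(), the framework has to hand the caller an OutputStream before the connection may be ready, so in 6.1.x it hands out an in-memory buffer; with setBody(...), the framework receives a callback, can open the connection first, and then lets the body write straight into it. A hypothetical plain-JDK analogue (my own names, not the actual Spring API):

```java
import java.io.IOException;
import java.io.OutputStream;

public class SetBodySketch {

  // analogue of StreamingHttpOutputMessage.Body
  interface Body {
    void writeTo(OutputStream out) throws IOException;
  }

  // counts bytes without storing them -- stands in for the live socket stream
  static final class CountingStream extends OutputStream {
    long count;
    @Override public void write(int b) { count++; }
    @Override public void write(byte[] b, int off, int len) { count += len; }
  }

  static long sendStreaming(Body body) throws IOException {
    CountingStream connection = new CountingStream(); // the "connection" exists first
    body.writeTo(connection);                         // the body streams into it directly
    return connection.count;
  }

  public static void main(String[] args) throws IOException {
    long sent = sendStreaming(out -> out.write(new byte[64 * 1024]));
    System.out.println("streamed " + sent + " bytes without buffering");
  }
}
```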

    Nonetheless, I would probably still open an issue with the Spring Framework team explaining the situation.