java · amazon-web-services · spring-boot · amazon-s3

Copying objects using S3Client is timing out because the objects are too large


I am working on a Spring Boot project and have an S3Client bean (backed by the DefaultS3Client class) that is responsible for copying content from one S3 bucket to another using the copyObject method. For large files (> 1.7 GB) the copy operation fails because the HTTP request times out.

Here is my code that initiates the copy:

CopyObjectRequest copyObjectRequest = CopyObjectRequest.builder()
        .sourceBucket(<the bucket name>)
        .sourceKey(<whatever source path I specify>)
        .destinationBucket(<the other bucket name>)
        .destinationKey(<whatever target path I specify>)
        .contentType("application/octet-stream")
        .build();

s3Client.copyObject(copyObjectRequest);

The operation fails when s3Client.copyObject(copyObjectRequest) is invoked for large files. This is the root cause shown in the stack trace:

software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Read timed out

at software.amazon.awssdk.services.s3.DefaultS3Client.copyObject(DefaultS3Client.java:960) ~[s3-2.21.41.jar!/:na]

My suspicion is that this happens because I have not set any timeout on the S3Client bean that I am using. Here is how the bean is declared:

public S3Client myS3Bean(AwsCredentialsProvider awsCredentialsProvider, String region, String endpoint) throws URISyntaxException {

    S3ClientBuilder builder = S3Client.builder()
            .region(Region.of(region))
            .credentialsProvider(awsCredentialsProvider)
            .endpointOverride(new URI(endpoint));
    return builder.build();
}

Is it possible to configure the timeout on this bean?


Solution

  • You are using the wrong API for files this large. Instead of calling copyObject on the base S3Client, use the software.amazon.awssdk.transfer.s3.S3TransferManager API, which is designed for large transfers and handles objects of 1.7 GB and well beyond. (To answer the literal question: yes, you can raise the client's timeouts, e.g. via ClientOverrideConfiguration.builder().apiCallTimeout(...) on the builder, but that only papers over a single long-running request rather than fixing the approach.)

    See an example of this API in the AWS code examples GitHub repository here:

    https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/javav2/example_code/s3/src/main/java/com/example/s3/transfermanager/UploadFile.java
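    A minimal sketch of the Transfer Manager approach applied to the copy in the question. The bucket and key names are hypothetical stand-ins for the placeholders above; S3TransferManager.create() is used here for brevity, and in a Spring Boot project you would typically expose it as a bean alongside (or instead of) the existing S3Client bean.

    ```java
    import software.amazon.awssdk.services.s3.model.CopyObjectRequest;
    import software.amazon.awssdk.transfer.s3.S3TransferManager;
    import software.amazon.awssdk.transfer.s3.model.CompletedCopy;
    import software.amazon.awssdk.transfer.s3.model.Copy;
    import software.amazon.awssdk.transfer.s3.model.CopyRequest;

    public class TransferManagerCopy {
        public static void main(String[] args) {
            // Transfer Manager is AutoCloseable; create() uses default settings
            try (S3TransferManager transferManager = S3TransferManager.create()) {

                // Hypothetical bucket/key names standing in for the placeholders in the question
                CopyObjectRequest copyObjectRequest = CopyObjectRequest.builder()
                        .sourceBucket("source-bucket")
                        .sourceKey("path/to/large-object")
                        .destinationBucket("destination-bucket")
                        .destinationKey("path/to/large-object")
                        .build();

                // Wrap the familiar CopyObjectRequest in a Transfer Manager CopyRequest
                CopyRequest copyRequest = CopyRequest.builder()
                        .copyObjectRequest(copyObjectRequest)
                        .build();

                // copy() returns immediately; block on the future to wait for completion
                Copy copy = transferManager.copy(copyRequest);
                CompletedCopy completedCopy = copy.completionFuture().join();
                System.out.println("Copy finished, ETag: "
                        + completedCopy.response().copyObjectResult().eTag());
            }
        }
    }
    ```

    The asynchronous completionFuture() also lets you attach callbacks instead of blocking, which fits better if the copy is triggered from a request-handling thread.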