node.js · amazon-web-services · amazon-s3

Is preventing overwriting existing S3 object atomic?


const {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
} = require("@aws-sdk/client-s3");


const params = {
  Bucket: process.env.AWS_S3_PHOTO_BUCKET,
  Key: photo_name.toString(),
  Body: resized_img,
  ContentType: req.file.mimetype,
  IfNoneMatch: "*",
};
const command = new PutObjectCommand(params);

await AWS_S3.send(command);

The IfNoneMatch: "*" parameter ensures that the operation writes the new object only if no object with the same Key already exists.

What if two such requests are sent to the S3 bucket concurrently?

Is the check-and-write atomic? Will only one of the two writes succeed?


Solution

  • Yes, this would be atomic.

    As documented, S3 is strongly consistent:

Updates to a single key are atomic. For example, if you make a PUT request to an existing key from one thread and perform a GET request on the same key from a second thread concurrently, you will get either the old data or the new data.

    Further, this scenario is called out:

    If multiple conditional writes occur for the same object name, the first write operation to finish succeeds. Amazon S3 then fails subsequent writes with a 412 Precondition Failed response.

    The documentation goes on to describe a race condition in which an object is deleted and recreated at the same instant: in that case you might get a 409 Conflict response, for which you should consider adding retry logic.
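    A minimal sketch of how the caller might handle those two failure modes. This assumes SDK v3 errors expose the HTTP status via err.$metadata.httpStatusCode (true for service errors in @aws-sdk/client-s3); the helper names and the retry count are my own, not from the docs:

    ```javascript
    // Classify an error thrown by a PutObject with IfNoneMatch: "*".
    //   412 Precondition Failed -> another writer won the race; the object exists.
    //   409 Conflict            -> a concurrent delete/create raced; safe to retry.
    function classifyConditionalPutError(err) {
      const status = err && err.$metadata && err.$metadata.httpStatusCode;
      if (status === 412) return "exists";
      if (status === 409) return "retry";
      return "rethrow";
    }

    // Wrap any async send function (e.g. (p) => AWS_S3.send(new PutObjectCommand(p)))
    // so that 409s are retried a few times, while a 412 is reported as "exists".
    async function putIfAbsent(send, params, maxRetries = 3) {
      for (let attempt = 0; ; attempt++) {
        try {
          await send(params);
          return "created"; // our write won the race
        } catch (err) {
          const kind = classifyConditionalPutError(err);
          if (kind === "exists") return "exists"; // someone else wrote first
          if (kind === "retry" && attempt < maxRetries) continue;
          throw err; // unrelated error, or retries exhausted
        }
      }
    }
    ```

    Usage with the question's code would look like `await putIfAbsent((p) => AWS_S3.send(new PutObjectCommand(p)), params)`. Keeping the classification separate from the retry loop makes the status-code handling easy to unit test without touching S3.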