I created an S3 bucket with Terraform 1.0.11:
resource "aws_s3_bucket" "this" {
  bucket = "examplebucket"

  tags = {
    Name = "examplebucket"
  }

  lifecycle {
    prevent_destroy = true
  }
}
resource "aws_s3_bucket_ownership_controls" "this" {
  bucket = aws_s3_bucket.this.id

  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}
resource "aws_s3_bucket_acl" "this" {
  depends_on = [aws_s3_bucket_ownership_controls.this]

  bucket = aws_s3_bucket.this.id
  acl    = "private"
}
This worked. However, I'm now going back and adding the following bucket policy, which restricts access to the bucket to an AWS VPC private endpoint (not shown here):
resource "aws_s3_bucket_policy" "this" {
  bucket = aws_s3_bucket.this.id
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Id": "examplebucket-policy",
  "Statement": [
    {
      "Sid": "BucketManagement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws-us-gov:iam::${data.aws_caller_identity.current.account_id}:root"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws-us-gov:s3:::${aws_s3_bucket.this.bucket}",
        "arn:aws-us-gov:s3:::${aws_s3_bucket.this.bucket}/*"
      ]
    },
    {
      "Sid": "AllowCRUDFromVPCEOnly",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Effect": "Deny",
      "Resource": [
        "arn:aws-us-gov:s3:::${aws_s3_bucket.this.bucket}",
        "arn:aws-us-gov:s3:::${aws_s3_bucket.this.bucket}/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "${var.vpce_id}"
        }
      }
    }
  ]
}
EOF
}
The new policy applies fine, and I'm able to access the bucket as expected via the AWS CLI through the endpoint.
However, whenever I run terraform plan now, it says the bucket has been deleted outside of Terraform:
Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the last "terraform apply":

  # module.mybucket.aws_s3_bucket.this has been deleted
  - resource "aws_s3_bucket" "this" {
      - arn                         = "arn:aws-us-gov:s3:::examplebucket" -> null
      - bucket                      = "examplebucket" -> null
      - bucket_domain_name          = "examplebucket.s3.amazonaws.com" -> null
      - bucket_regional_domain_name = "examplebucket.s3.us-gov-west-1.amazonaws.com" -> null
      - hosted_zone_id              = "###########" -> null
      - id                          = "examplebucket" -> null
      - object_lock_enabled         = false -> null
      - region                      = "us-west-1" -> null
      - request_payer               = "BucketOwner" -> null
      - tags                        = {
          - "Name" = "examplebucket"
        } -> null
      - tags_all                    = {
          - "Name" = "examplebucket"
        } -> null

      - grant {
          - id          = "2a05a4f3cca#######################d3fe9273" -> null
          - permissions = [
              - "FULL_CONTROL",
            ] -> null
          - type        = "CanonicalUser" -> null
        }

      - server_side_encryption_configuration {
          - rule {
              - bucket_key_enabled = false -> null

              - apply_server_side_encryption_by_default {
                  - kms_master_key_id = "arn:aws-us-gov:kms:us-west-1:########:key/44a2###########991" -> null
                  - sse_algorithm     = "aws:kms" -> null
                }
            }
        }

      - versioning {
          - enabled    = false -> null
          - mfa_delete = false -> null
        }
    }
I confirmed that if I manually delete the S3 bucket policy, Terraform no longer claims the bucket itself has been deleted. What policy is needed to allow Terraform to manage the bucket properly?
I think the best we can do here is to read the source code for the implementation of aws_s3_bucket in the hashicorp/aws provider to learn what its rule is for deciding that a bucket no longer exists.
When Terraform reports "Objects have changed outside of Terraform" that means that Terraform asked the provider to "read" the remote object, and the result was different than what was saved at the end of the previous apply. So in this case we need to look in this resource type's implementation of "read", which at the time of writing starts as follows:
func resourceBucketRead(ctx context.Context, d *schema.ResourceData, meta any) diag.Diagnostics {
	var diags diag.Diagnostics
	conn := meta.(*conns.AWSClient).S3Client(ctx)

	_, err := findBucket(ctx, conn, d.Id())

	if !d.IsNewResource() && tfresource.NotFound(err) {
		log.Printf("[WARN] S3 Bucket (%s) not found, removing from state", d.Id())
		d.SetId("")
		return diags
	}

	// [...]
For the old SDK that this resource type is written with, calling d.SetId("") and then returning is how the provider reports "no longer exists", and so from this initial code we can learn that the provider considers a bucket to not exist if this findBucket function returns an error that tfresource.NotFound classifies as representing "not found".

So let's now look at findBucket:
func findBucket(ctx context.Context, conn *s3.Client, bucket string, optFns ...func(*s3.Options)) (*s3.HeadBucketOutput, error) {
	input := s3.HeadBucketInput{
		Bucket: aws.String(bucket),
	}

	output, err := conn.HeadBucket(ctx, &input, optFns...)

	// For directory buckets that no longer exist it's the CreateSession call invoked by HeadBucket that returns "NoSuchBucket",
	// and that error code is flattened into HeadBucket's error message -- hence the 'errs.Contains' call.
	if tfawserr.ErrHTTPStatusCodeEquals(err, http.StatusNotFound) || tfawserr.ErrCodeEquals(err, errCodeNoSuchBucket) || errs.Contains(err, errCodeNoSuchBucket) {
		return nil, &retry.NotFoundError{
			LastError:   err,
			LastRequest: input,
		}
	}

	if output == nil {
		return nil, tfresource.NewEmptyResultError(input)
	}

	return output, nil
}
This calls the AWS SDK method HeadBucket, whose documentation says:

If the bucket does not exist or you do not have permission to access it, the HEAD request returns a generic 400 Bad Request, 403 Forbidden or 404 Not Found code. A message body is not included, so you cannot determine the exception beyond these HTTP response codes.
This suggests that the S3 API intentionally treats "does not exist" and "no permission" as equivalent, which is consistent with the behavior you observed: changing the permissions caused the bucket to appear to vanish as far as the provider is concerned. I think we can assume that this operation is returning a result that the provider classifies as "not found".
So the final question then is exactly which S3 operation that HeadBucket method corresponds to. That method's own source code suggests that it wraps s3:HeadBucket.

Bucket operations and permissions states that the s3:HeadBucket API action requires s3:ListBucket to be listed as an allowed permission in all relevant IAM policies.
To figure out which are "all relevant IAM policies", you can refer to How Amazon S3 authorizes a request for a bucket operation, along with Policy actions for Amazon S3.
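Putting that together: your Deny statement blocks s3:ListBucket for every principal not coming through the VPC endpoint, which includes whatever identity runs terraform plan, so the provider's HeadBucket call fails and the bucket looks deleted. One common fix, sketched below, is to exempt the Terraform identity from the Deny using the aws:PrincipalArn global condition key. This is only a sketch: var.terraform_role_arn is a made-up variable standing in for whatever role or user your Terraform runs as, and since StringNotEquals ANDs its keys together, the Deny now only matches requests that are both outside the endpoint and not from that identity.

```json
{
  "Sid": "AllowCRUDFromVPCEOnly",
  "Principal": "*",
  "Action": [
    "s3:GetObject",
    "s3:DeleteObject",
    "s3:PutObject",
    "s3:ListBucket"
  ],
  "Effect": "Deny",
  "Resource": [
    "arn:aws-us-gov:s3:::${aws_s3_bucket.this.bucket}",
    "arn:aws-us-gov:s3:::${aws_s3_bucket.this.bucket}/*"
  ],
  "Condition": {
    "StringNotEquals": {
      "aws:sourceVpce": "${var.vpce_id}",
      "aws:PrincipalArn": "${var.terraform_role_arn}"
    }
  }
}
```

With a carve-out like this in place, the HeadBucket call made by findBucket is no longer denied for the Terraform identity, so refresh should stop reporting the bucket as deleted while other principals remain restricted to the endpoint.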