amazon-web-services · docker · terraform · amazon-ecr · cloud-init

Run docker image from ECR in EC2


I'm trying to initialize an EC2 instance that runs a Docker image from ECR via a script in the user data. The user data is retrieved by AWS successfully, but it does not appear to execute.

My Terraform script:

resource "aws_instance" "ec2" {
  ami                   = var.ami_id
  instance_type         = var.instance_type
  vpc_security_group_ids= [aws_security_group.allow_ssh.id]

  tags = {
    Name = var.instance_name
  }
    user_data = <<-EOF
        #!/bin/bash
        yum update -y
        yum install -y docker
        service docker start

        export test="test"

        export AWS_ACCESS_KEY_ID="${var.AWS_ACCESS_KEY_ID}"
        export AWS_SECRET_ACCESS_KEY="${var.AWS_SECRET_ACCESS_KEY}"
        export AWS_SESSION_TOKEN="${var.AWS_SESSION_TOKEN}"

        usermod -aG docker ec2-user
        docker login -u AWS -p $(aws ecr get-login-password --region eu-west-1) ${var.ecr_repository_url}
        docker pull ${var.ecr_repository_url}:latest
        docker run -d -p 8080:8080 ${var.ecr_repository_url}:latest
        EOF
    
    
}

SSHing into the instance afterwards, the exported variable is empty and no container is running:

   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
Last login: Wed Apr 23 08:10:44 2025 from 1*.***.***.*2
[ec2-user@ip-1**-**-**-**0 ~]$ echo $test

[ec2-user@ip-1**-**-**-**0 ~]$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

The issue being that I'm not authenticated/authorized to log in to ECR (so the script did run after all, per the logs). Logs from sudo cat /var/log/cloud-init-output.log:

Error response from daemon: login attempt to https://4*************.dkr.ecr.eu-west-1.amazonaws.com/v2/ failed with status: 400 Bad Request
Error response from daemon: Head "https://4***********.dkr.ecr.eu-west-1.amazonaws.com/v2/*********/manifests/latest": no basic auth credentials



Solution

  • The fix involved changing the Docker login command to: aws ecr get-login-password --region eu-west-1 | docker login -u AWS --password-stdin ${var.ecr_repository_url}
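
    For reference, the tail of the user_data after that change (same variables as above; only the login line differs):

        usermod -aG docker ec2-user
        aws ecr get-login-password --region eu-west-1 | docker login -u AWS --password-stdin ${var.ecr_repository_url}
        docker pull ${var.ecr_repository_url}:latest
        docker run -d -p 8080:8080 ${var.ecr_repository_url}:latest

    Piping the token via --password-stdin is the form the AWS ECR documentation uses and keeps the token off the command line.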

    However, an IAM role is the better solution: attach the needed AWS policies to it, and there is no need to export AWS credentials in the user_data any more.

    # Define the IAM Role
    resource "aws_iam_role" "ec2_role" {
      name = "${var.instance_name}-role"
    
      assume_role_policy = jsonencode({
        Version = "2012-10-17"
        Statement = [
          {
            Effect = "Allow"
            Principal = {
              Service = "ec2.amazonaws.com"
            }
            Action = "sts:AssumeRole"
          }
        ]
      })
    }
    
    # Attach AWS-Managed Policy for ECR Access
    resource "aws_iam_role_policy_attachment" "ecr_access" {
      role       = aws_iam_role.ec2_role.name
      policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
    }
    
    # Attach AWS-Managed Policy for CloudWatch Logs (Optional)
    resource "aws_iam_role_policy_attachment" "cloudwatch_logs" {
      role       = aws_iam_role.ec2_role.name
      policy_arn = "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
    }
    
    # Create an Instance Profile
    resource "aws_iam_instance_profile" "ec2_instance_profile" {
      name = "${var.instance_name}-instance-profile"
      role = aws_iam_role.ec2_role.name
    }
    
    resource "aws_instance" "ec2" {
      ...
      iam_instance_profile = aws_iam_instance_profile.ec2_instance_profile.name
      ...
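
    With the instance profile attached, the AWS CLI on the instance picks up temporary credentials from the instance metadata service automatically, so the user_data can drop the export lines entirely. A sketch of the slimmed-down script, assuming the same variables as above:

        user_data = <<-EOF
          #!/bin/bash
          yum update -y
          yum install -y docker
          service docker start
          usermod -aG docker ec2-user

          # No AWS_* exports needed: credentials come from the instance profile
          aws ecr get-login-password --region eu-west-1 | docker login -u AWS --password-stdin ${var.ecr_repository_url}
          docker pull ${var.ecr_repository_url}:latest
          docker run -d -p 8080:8080 ${var.ecr_repository_url}:latest
        EOF

    If the login still fails, running aws sts get-caller-identity on the instance is a quick way to confirm the role is actually attached.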