I need to launch a set of Docker containers on an EC2 instance using docker compose. The EC2 instances are part of an auto-scaling group, and the compose command is executed as part of the user-data script that runs on instance startup.
As part of the user-data script, I need to do two things:

1. Deliver the docker-compose.yaml file to the instance. I plan to place the file into an S3 bucket and download it from there (rough sketch below). If there is a better way, let me know.
2. Deliver the environment variables, referenced by docker-compose.yaml, to the instance. How do I securely do this? The env variables include secrets like database credentials.

I have read the docker compose environment documentation, so the possible solutions seem to be:

1. Embed the env variables in the docker-compose.yaml file using the environment attribute. Is that safe?
2. Put the env variables in separate files referenced via env_file, and put them on S3 too. That's basically the same as #1, but allows us to maintain separate env configurations.
3. I am hoping there is a way to do this using some sort of managed secret service on AWS?
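For concreteness, my rough plan for the download step (plus what option #2 would add) looks something like the sketch below. The bucket name, keys, and paths are just placeholders, not anything I have settled on:

```bash
#!/bin/bash
# user-data sketch: fetch the compose file (and, for option #2, an env file)
# from S3, then start the stack. Bucket name and keys are placeholders.
set -euo pipefail

mkdir -p /opt/myapp && cd /opt/myapp
aws s3 cp s3://my-deploy-bucket/myapp/docker-compose.yaml ./docker-compose.yaml

# Option #2 would also pull an env file, which is where my security concern is:
# aws s3 cp s3://my-deploy-bucket/myapp/app.env ./app.env

docker compose up -d
```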
My requirements are:
Am I overthinking this? How do people normally do this? I assume this is a common problem, but I haven't been able to find a clear example that fits my requirements.
I suggest looking into ECS instead of inventing your own docker container orchestration system for EC2. However, to answer the basic question:
How do people normally do this? I assume this is a common problem, but I haven't been able to find a clear example that fits my requirements.
The normal way to provide secrets to something you are running via user-data is to have the user-data script call out to AWS Parameter Store or AWS Secrets Manager to load the secrets. The user-data script would use the AWS CLI to make those calls. The IAM instance profile assigned to the EC2 instance would need to grant the appropriate permissions to access the secret(s) in Parameter Store or Secrets Manager, as well as the appropriate permissions to decrypt the secrets if you are using a KMS CMK to encrypt them.
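As a minimal sketch of that pattern, a user-data script along these lines would work. The parameter names, bucket, and file paths below are assumptions for illustration only; adjust them to your setup:

```bash
#!/bin/bash
# Sketch of a user-data script that pulls secrets from SSM Parameter Store
# (SecureString parameters) and hands them to docker compose.
# Parameter names, bucket, and paths are placeholders.
set -euo pipefail

APP_DIR=/opt/myapp
mkdir -p "$APP_DIR" && cd "$APP_DIR"

# 1. Fetch the compose file from S3 (instance profile needs s3:GetObject).
aws s3 cp s3://my-deploy-bucket/myapp/docker-compose.yaml ./docker-compose.yaml

# 2. Fetch secrets. The instance profile needs ssm:GetParameter on these
#    parameters (and kms:Decrypt on the key if a customer-managed CMK is used).
DB_USER=$(aws ssm get-parameter --name /myapp/prod/db_user \
          --with-decryption --query Parameter.Value --output text)
DB_PASSWORD=$(aws ssm get-parameter --name /myapp/prod/db_password \
          --with-decryption --query Parameter.Value --output text)

# Alternatively, with Secrets Manager:
# DB_PASSWORD=$(aws secretsmanager get-secret-value --secret-id myapp/prod/db \
#               --query SecretString --output text)

# 3. Write a root-only .env file next to docker-compose.yaml; compose reads it
#    to interpolate ${DB_USER} / ${DB_PASSWORD} references in the compose file.
umask 077
cat > .env <<EOF
DB_USER=${DB_USER}
DB_PASSWORD=${DB_PASSWORD}
EOF

# 4. Start the stack.
docker compose up -d
```

The key point is that the secret values never appear in the user-data itself, which is retrievable from the instance metadata service by any process on the instance; they are fetched at boot using the instance profile's credentials and written only to a root-only file on the host.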