I'm using AWS ECS Fargate. I need a volume to store some files for my container, but since Fargate is serverless there is no instance storage, and I found that EFS is the way to use volumes with Fargate.
My problem is that my GitHub workflow does not copy files to EFS. The files exist on the GitHub runner, but the step below does not work; I expected it to copy the files to EFS (and so to the Fargate container).
This is the GitHub workflow step; it runs on the runner. I found this approach via ChatGPT.
- name: Copy source code to EFS
  run: |
    aws --version
    sudo mkdir /flask
    sudo mkdir /nginx
    sudo mkdir /nginx/log
    # Copy source code files to EFS (assuming EFS is mounted at /efs_mount_point)
    sudo cp -R ./flask/* /flask
    sudo cp -R ./nginx/* /var/log/nginx
    sudo cp -R ./ /
    ls -Rl /flask
    ls -Rl /nginx
    ls -Rl /nginx/log
My scenario:
I'm confused about the flow of copying files.
The container starts, but it cannot find the files, so a not-found error is raised and the task terminates, which means I cannot open a terminal inside the Fargate container.
I opened a terminal on an EC2 instance that has the EFS volume mounted and checked the EFS directory. It contains only the directories I set as paths on the access points, with no files.
EFS access point
task definition
Please let me know how to copy source code files to EFS; the container needs to read some of these files.
Updated
- name: Build, tag, and push image to Amazon ECR
  id: build-image
  run: |
    export DOCKER_BUILDKIT=1
    docker-compose -f docker-compose-development.yaml build
    docker push $ECR_REGISTRY/$ECR_FLASK_REPOSITORY:$IMAGE_TAG
    docker push $ECR_REGISTRY/$ECR_NGINX_REPOSITORY:$IMAGE_TAG
    echo "::set-output name=flask_image::$ECR_REGISTRY/$ECR_FLASK_REPOSITORY:$IMAGE_TAG"
    echo "::set-output name=nginx_image::$ECR_REGISTRY/$ECR_NGINX_REPOSITORY:$IMAGE_TAG"
- name: Fill in the new image ID in the Amazon ECS task definition
  id: flask-task-def
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: fargate-task.json
    container-name: flask
    image: ${{ steps.build-image.outputs.flask_image }}

- name: Fill in the new image ID in the Amazon ECS task definition
  id: nginx-task-def
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ steps.flask-task-def.outputs.task-definition }}
    container-name: nginx
    image: ${{ steps.build-image.outputs.nginx_image }}
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    # Deploy the output of the last render step, so the nginx image
    # update is included as well as the flask one
    task-definition: ${{ steps.nginx-task-def.outputs.task-definition }}
    service: service
    cluster: cluster
    codedeploy-deployment-group: g
    codedeploy-appspec: appspec.yaml
    wait-for-service-stability: true
appspec.yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:ap-northeast-2:xx:task-definition/fargate-task:35"
        LoadBalancerInfo:
          ContainerName: "nginx"
          ContainerPort: 80
        PlatformVersion: "LATEST"
That GitHub Action code in your question is just copying files around on the GitHub runner server. It isn't connected to EFS at all.
I can think of two options for accessing EFS from a GitHub runner:
Enable AWS Transfer Family for SFTP access to your EFS volume, then use an SFTP action in GitHub. However, this is going to cost over $200 a month just to have SFTP enabled.
Create an EC2 instance with the EFS volume mounted on it, configure GitHub to use that EC2 instance as a self-hosted GitHub runner, and expose the EFS volume mount to the workflow jobs that run on it.
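For the second option, mounting the EFS file system on the self-hosted runner instance might look roughly like this. The file system ID, mount point, and directory names here are placeholders, not values from your setup:

```shell
# Install the EFS mount helper (Amazon Linux; the package name differs on other distros)
sudo yum install -y amazon-efs-utils

# Mount the EFS file system (fs-12345678 is a placeholder ID)
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs

# A workflow step running on this runner can then copy straight into EFS
sudo cp -R ./flask/ /mnt/efs/flask/
sudo cp -R ./nginx/ /mnt/efs/nginx/
```

The copy destinations must match the paths configured on your EFS access points, otherwise the containers will still see empty directories.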
Copying code from your GitHub repository to EFS volumes that your containers later read is not a typical setup. Typically you would build a new Docker image in your GitHub Action that includes the updated source code, push the image to your AWS ECR repository, and then trigger a redeploy of your ECS/Fargate service. EFS is better reserved for data the containers generate or share at runtime, not for the application code itself.
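As a sketch of that typical flow, a hypothetical Dockerfile for the flask container bakes the source in at build time, so the running container never needs EFS for code. The base image, install step, and entrypoint below are illustrative assumptions, not taken from your project:

```shell
# Hypothetical Dockerfile for the flask service (base image and CMD are assumptions)
cat > Dockerfile.flask <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY flask/ /app/
RUN pip install -r requirements.txt
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
EOF

# Build and push exactly as your existing build-image step already does
docker build -f Dockerfile.flask -t "$ECR_REGISTRY/$ECR_FLASK_REPOSITORY:$IMAGE_TAG" .
docker push "$ECR_REGISTRY/$ECR_FLASK_REPOSITORY:$IMAGE_TAG"
```

Your existing render-task-definition and deploy steps then roll the service forward to the new image tag, which replaces the EFS copy step entirely.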