My Docker container runs a Python app (a backend API) that lets users upload various documents, mostly PDFs. I think that, because of these PDF/file uploads, the container keeps creating core dump files, as shown in the screenshot below: screenshot_coredump
This slows down the container and eventually the container crashes! I found a similar question on Stack Overflow (how-to-disable-core-file-dumps-in-docker-container), but the solution there only seems to work with docker run on a local PC. How can I fix this in a production environment?
The container runs ubuntu:22.04.
Below is my Dockerfile:
FROM python:3.9
RUN mkdir /code
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt
# Download the pandoc deb file
RUN apt-get update && apt-get install -y wget
RUN wget https://github.com/jgm/pandoc/releases/download/3.1.2/pandoc-3.1.2-1-amd64.deb
# Install the downloaded deb file
RUN dpkg -i pandoc-3.1.2-1-amd64.deb
COPY . .
CMD ["gunicorn", "-w", "17", "-k", "uvicorn.workers.UvicornWorker", "--timeout", "120", "main:app", "-b", "0.0.0.0:80"]
I also use a task definition to deploy my container:
{
  "taskDefinitionArn": "arn:aws:ecs:us-west-2:$ARN:task-definition/a$task-def:30",
  "containerDefinitions": [
    {
      "name": "$NAME",
      "image": "$ARN.dkr.ecr.us-west-2.amazonaws.com/$IMAGE-NAME:22636912fe7ab73cf3bd23bdb3d88d317d00b272",
      "cpu": 0,
      "portMappings": [
        {
          "name": "$CONTAINER_NAME-80-tcp",
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "environment": [],
      "environmentFiles": [
        {
          "value": "arn:aws:s3:::$S3_Resource",
          "type": "s3"
        }
      ],
      "mountPoints": [],
      "volumesFrom": [],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/$LOG_Group",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "family": "$LOG_FAMILY",
  "taskRoleArn": "arn:aws:iam::$ARN:role/ecsTaskExecutionRole",
  "executionRoleArn": "arn:aws:iam::$ARN:role/ecsTaskExecutionRole",
  "networkMode": "awsvpc",
  "revision": 30,
  "volumes": [
    {
      "name": "new",
      "host": {}
    }
  ],
  "status": "ACTIVE",
  "requiresAttributes": [
    {
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
    },
    {
      "name": "ecs.capability.execution-role-awslogs"
    },
    {
      "name": "com.amazonaws.ecs.capability.ecr-auth"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
    },
    {
      "name": "ecs.capability.env-files.s3"
    },
    {
      "name": "ecs.capability.increased-task-cpu-limit"
    },
    {
      "name": "com.amazonaws.ecs.capability.task-iam-role"
    },
    {
      "name": "ecs.capability.execution-role-ecr-pull"
    },
    {
      "name": "ecs.capability.extensible-ephemeral-storage"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
    },
    {
      "name": "ecs.capability.task-eni"
    },
    {
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
    }
  ],
  "placementConstraints": [],
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "cpu": "8192",
  "memory": "24576",
  "ephemeralStorage": {
    "sizeInGiB": 200
  },
  "runtimePlatform": {
    "cpuArchitecture": "X86_64",
    "operatingSystemFamily": "LINUX"
  },
  "registeredAt": "2023-07-30T20:49:22.769Z",
  "registeredBy": "arn:aws:sts::$ARN:assumed-role/github/github",
  "tags": []
}
GitHub Actions workflow:
name: Deploy Document-Management-service To Amazon ECS

on:
  push:
    branches:
      - "main"

env:
  AWS_REGION: # set this to your preferred AWS region, e.g. us-west-1
  ECR_REPOSITORY: # set this to your Amazon ECR repository name
  ECS_SERVICE: # set this to your Amazon ECS service name
  ECS_CLUSTER: # set this to your Amazon ECS cluster name
  ECS_TASK_DEFINITION: .github/workflows/main-task-definition.json # set this to the path to your Amazon ECS task definition file, e.g. .aws/task-definition.json
  CONTAINER_NAME: # set this to the name of the container in the containerDefinitions section of your task definition

permissions:
  id-token: write
  contents: read # This is required for actions/checkout@v2

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: production

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: ${{ secrets.AWS_ARN }} # AWS ARN with IAM role
          role-session-name: github
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, Push, Tag and Deploy Container to ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG --ulimit core=0 .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
I tried adding:
--ulimit core=0
to my GitHub Actions script, so it looked like this:
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG --ulimit core=0 .
but then I realized that this flag is meant to be used with a docker run command.
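As far as I know, --ulimit on docker build only affects the temporary containers used for the RUN steps of the build, so it never reaches the running task. The linked answer applies the flag when the container is started, roughly like this (a local-only sketch reusing the placeholders from my workflow; Fargate does not expose docker run flags):

# Local only: start the container with the core dump size limited to 0 (soft and hard)
docker run --ulimit core=0 -p 80:80 $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG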
So, is there any way to disable core file dumps in a production environment?
You can disable core dumps in Linux by setting the ulimit core hard and soft values to 0.
This can be done via the ulimit setting in the ECS container definition.
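For example, in the task-definition JSON you keep the existing keys of the container definition (image, portMappings, logConfiguration, and so on) and add a ulimits array. A minimal fragment for the $NAME container from the question, assuming the platform honours the core limit, would look roughly like this:

"containerDefinitions": [
  {
    "name": "$NAME",
    "ulimits": [
      {
        "name": "core",
        "softLimit": 0,
        "hardLimit": 0
      }
    ]
  }
]

After registering the new revision and redeploying the service, ulimit -c inside the container should report 0, so the kernel stops writing core files when a worker crashes.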