Tags: python, docker, docker-compose, python-venv, sigterm

Python Docker SIGTERM not being fired on container stop


When I run my script inside a Docker container and call

docker container stop <container-name>

it does not terminate gracefully.

Dockerfile

FROM python:3.10.11-slim as base

ENV PYTHONFAULTHANDLER=1 \
    PYTHONHASHSEED=random \
    # Turns off buffering for easier container logging
    PYTHONUNBUFFERED=1

RUN apt-get update \
    && apt-get install --no-install-recommends -y gcc libffi-dev g++ \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

FROM base as builder

ENV PIP_DEFAULT_TIMEOUT=100 \
    PIP_DISABLE_PIP_VERSION_CHECK=1 \
    PIP_NO_CACHE_DIR=1 \
    POETRY_VERSION=1.5.1

RUN pip install "poetry==$POETRY_VERSION"
RUN python -m venv /venv

WORKDIR /home/ch_news

COPY --chown=10000:10000 pyproject.toml poetry.lock ./
# --no-interaction: do not ask any interactive questions
# --no-ansi: make the output more log friendly
RUN . /venv/bin/activate && poetry install --no-interaction --no-dev --no-ansi

COPY --chown=10000:10000 . .
RUN . /venv/bin/activate && poetry build

FROM base as final

WORKDIR /home/ch_news

RUN groupadd --gid 10000 ch_news \
    && useradd --uid 10000 --gid ch_news --shell /bin/bash --create-home ch_news

COPY --from=builder --chown=10000:10000 /venv /venv
COPY --from=builder --chown=10000:10000 /home/ch_news/dist .
COPY --chown=10000:10000 ./docker/production/python_server/docker-entrypoint.sh ./
RUN chmod +x docker-entrypoint.sh

RUN . /venv/bin/activate && pip install *.whl

USER ch_news
CMD ["./docker-entrypoint.sh"]

docker-entrypoint.sh

#!/bin/sh

set -e
. /venv/bin/activate

python -m news

I am running everything with docker-compose, if that helps

version: '3.9' # optional since v1.27.0
name: ch_news_prod
services:
  ch_news_pro_python:
    build:
      context: ../../
      dockerfile: ./docker/production/python_server/Dockerfile
    container_name: ch_news_pro_python
    depends_on:
      ch_news_pro_postgres:
        condition: service_healthy
    env_file:
      - .env
    image: ch_news_pro_python_image
    networks:
      - network
    restart: 'always'

  ch_news_pro_postgres:
    build:
      context: ../..
      dockerfile: ./docker/production/postgres_server/Dockerfile
    container_name: ch_news_pro_postgres
    env_file:
      - .env
    healthcheck:
      test:
        [
          'CMD-SHELL',
          "pg_isready -d 'host=ch_news_pro_postgres user=ch_api_user port=47289 dbname=ch_api_db_pro'",
        ]
      interval: 5s
      timeout: 5s
      retries: 3
      start_period: 10s
    image: ch_news_pro_postgres_image
    networks:
      - network
    ports:
      - '47289:47289'
    restart: 'always'
    volumes:
      - postgres_data:/var/lib/postgresql/data

networks:
  network:
    driver: bridge

volumes:
  postgres_data:
    driver: local

This is what my script roughly looks like

import asyncio
import logging
import signal
from typing import Any

logger = logging.getLogger(__name__)


def handle_sigterm(signum: int, frame: Any) -> None:
    # Turn SIGTERM into the same exception that SIGINT raises,
    # so the except block below handles both signals.
    raise KeyboardInterrupt()


async def periodic_task() -> None:
    # Placeholder for the real periodic work.
    while True:
        await asyncio.sleep(1)


def app() -> None:
    try:
        asyncio.run(periodic_task())
    except KeyboardInterrupt:
        logger.info("Shutting down because you exited manually...")


signal.signal(signal.SIGTERM, handle_sigterm)
signal.signal(signal.SIGINT, handle_sigterm)

Can someone tell me where I am going wrong, and how to make the SIGTERM handler raise the KeyboardInterrupt when running via docker-compose?


Solution

  • Your docker-entrypoint.sh script runs the main application as an ordinary subprocess, so the shell script stays running while your Python application runs. If you run docker-compose exec ch_news_pro_python ps, I expect you will see that process 1 is the docker-entrypoint.sh script, not the Python application.

    docker stop sends its signal only to process 1 in the container. That means the shell script receives SIGTERM and doesn't forward it to the application; after the grace period (10 seconds by default) Docker kills the container with SIGKILL, which is why nothing shuts down gracefully.

    The easiest way to work around this is to change the last line of the script to exec the Python application. This causes the shell script to replace itself with the Python interpreter; you can't run anything in the script after the exec command, but conversely, now python will be process 1 and will receive Docker signals.

    exec python -m news
    #^^^
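
    Putting that together, the whole docker-entrypoint.sh would look something like this:

    #!/bin/sh

    set -e
    . /venv/bin/activate

    # exec replaces this shell with the Python interpreter, so Python
    # becomes process 1 and receives the SIGTERM sent by docker stop.
    exec python -m news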
    

    Alternatively, you can usually run commands directly out of a virtual environment without specifically activating it first. The simplest Docker setup might be to use the Poetry scripts setting to create a script in /venv/bin/news that runs your application; have poetry install install the entire application into the virtual environment; put the /venv/bin directory in the $PATH; and then set the image CMD to run the wrapper script.

    FROM base as final
    COPY --from=builder /venv /venv
    ENV PATH /venv/bin:$PATH
    CMD ["news"]
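
    If you go the Poetry scripts route, the entry point is declared in pyproject.toml. A minimal sketch, assuming your app() function is importable as news.__main__:app (adjust the module path to wherever your entry point actually lives):

    [tool.poetry.scripts]
    # Generates a "news" console script in the environment's bin/ directory
    # that imports and calls news.__main__:app().
    news = "news.__main__:app"

    Because that wrapper lands in /venv/bin, which is on the $PATH, CMD ["news"] starts a Python process directly, so Python is again process 1 and receives the SIGTERM from docker stop.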