How do I configure Docker to develop two dependent Python repositories efficiently?
In development: mount the framework into the runtime container using volumes for fast iteration (hot reload); a minimal sketch follows the layout below.
In production: build a clean runtime image that doesn't depend on host volumes.
dev/
├── framework/
│   └── ...
└── runtime/
    ├── ...
    └── Dockerfile
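As a minimal sketch of the development mode, assuming the runtime image has already been built and tagged runtime:dev (a hypothetical tag) and expects the framework at /opt/framework, a plain docker run with a bind mount gives hot reload:

cd runtime
# bind-mount the local framework checkout over the copy baked into the image
docker run --rm -it \
  -v "$(pwd)/../framework:/opt/framework:rw" \
  runtime:dev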
EDIT: iteration speed on the dependency is limited (e.g. an open-source project requiring user acceptance). Unit testing the runtime in an ordinary venv with pip install -e
is the obvious choice - but any CI/CD pipeline will have to wait until the dependency has been released before it can install it.
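For reference, the venv route is a short sketch like this, assuming both repos are pip-installable:

cd runtime
python -m venv .venv
source .venv/bin/activate
pip install -e ../framework   # editable install of the local dependency
pip install -e .              # the runtime itself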
How can I run this in Docker locally ahead of all that - move fast, break things and iterate?
The solution I came across is to use additional build contexts together with multi-stage builds. It turns out that a similar case is showcased on Docker's blog.
FROM ... AS prod
# your prod image...

FROM prod AS dev
# pull the framework sources from the additional build context
COPY --from=myContext --chown=airflow:root ./src /opt/framework/
# editable install so the bind mount defined in compose takes effect
RUN pip install --no-deps --upgrade --force-reinstall -e /opt/framework
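The pip flags do the heavy lifting here: --no-deps skips dependency resolution (the prod stage is assumed to ship the dependencies already), and --upgrade --force-reinstall replaces whatever released version of the framework the prod image may contain with the local editable copy.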
provided that you pass the additional context at build time, as below:
docker build --file ./Dockerfile --build-context myContext=../framework .
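The prod image stays clean: targeting the prod stage skips the dev-only layers, and since myContext is only referenced in the dev stage, the additional context shouldn't be required for that build:

docker build --file ./Dockerfile --target prod .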
The docker compose configuration follows suit:
build:
  context: .
  additional_contexts:
    myContext: ../framework
  dockerfile: Dockerfile
  target: dev
volumes:
  - ../framework:/opt/framework:rw
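With that in place, rebuild and run is a single command; the bind mount shadows the sources copied into the image, so edits in ../framework are picked up without rebuilding:

docker compose up --build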