In order to keep the final Docker image small, my usual approach to building Python projects with binary dependencies is to build the pinned dependencies in a first stage and copy the resulting wheels into a final stage that lacks the build toolchain. Broadly:
FROM python:3 as builder
RUN apt-get update && apt-get install -y libfoo-dev libbar-dev
COPY constraints.txt /
RUN pip wheel \
    --constraint /constraints.txt \
    --wheel-dir /wheels \
    python-foo pyBar
FROM python:3-slim
RUN apt-get update && apt-get install -y libfoo libbar
COPY requirements.txt constraints.txt /
COPY --from=builder /wheels /wheels
RUN pip install \
    --requirement /requirements.txt \
    --constraint /constraints.txt \
    --only-binary :all: \
    --find-links /wheels
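For completeness, a minimal sketch of the two files the Dockerfile copies; python-foo and pyBar are the placeholder names from above, and the pinned versions are invented for illustration:

# constraints.txt -- exact pins for everything that gets built or installed
python-foo==1.2.3
pyBar==4.5.6

# requirements.txt -- the top-level dependencies
python-foo
pyBar

The --only-binary :all: flag guarantees that pip never falls back to compiling an sdist in the slim stage: either a matching wheel exists in /wheels (or on the index) or the install fails.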
Now I am trying to do something similar on a project managed with pipenv, and I am quite at a loss as to how to achieve the same effect: pre-building, in a first stage, the few packages that lack a public wheel, at the versions pinned in the lockfile, and then consuming them from a pipenv install --deploy in the final stage.
Does this even make sense given the hash checking pipenv does? Is there any alternative that reduces the final image size? I'd like to avoid a private index to store prebuilt wheels; I'd rather keep the solution contained in the Dockerfile.
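For context on the hash concern: Pipfile.lock records, for each package, the sha256 hashes of the archives published on the index, along the lines of the excerpt below (hashes elided, version made up). A wheel built locally in a first stage would hash differently from the published archives, which is why I suspect pipenv would reject it.

"python-foo": {
    "hashes": [
        "sha256:…",
        "sha256:…"
    ],
    "index": "pypi",
    "version": "==1.2.3"
}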
Related question: How to make lightweight docker image for python app with pipenv
A solution is to build a full virtualenv in the first stage and copy it whole, not only some wheels.
FROM python:3 as builder
RUN apt-get update && apt-get install -y libfoo-dev libbar-dev
RUN pip install pipenv
WORKDIR /app
COPY Pipfile* /app/
# An existing (even empty) .venv makes pipenv create the virtualenv
# in-project, at a predictable path we can copy from later
RUN mkdir /app/.venv
RUN pipenv install --deploy
FROM python:3-slim
RUN apt-get update && apt-get install -y libfoo libbar
WORKDIR /app
COPY --from=builder /app/.venv /app/.venv
ENV PATH=/app/.venv/bin:$PATH
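The empty /app/.venv directory in the builder is what makes pipenv create the virtualenv inside the project instead of under ~/.local/share/virtualenvs, so the final stage knows where to copy from; copying the venv across works here because both stages run the same interpreter version at the same path (python:3 and python:3-slim). A sketch of a more explicit variant of the builder stage, using pipenv's documented PIPENV_VENV_IN_PROJECT setting instead of the mkdir trick:

FROM python:3 as builder
RUN apt-get update && apt-get install -y libfoo-dev libbar-dev
RUN pip install pipenv
# Tell pipenv to create the virtualenv in /app/.venv rather than in its cache
ENV PIPENV_VENV_IN_PROJECT=1
WORKDIR /app
COPY Pipfile* /app/
RUN pipenv install --deploy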