When I execute `docker compose up -d`, I want to be able to reach my Flask app on localhost:5123 and a Jupyter server on localhost:8123 in the web browser. I want to use the Jupyter server to test some code that I will eventually use in my Flask app, so the Jupyter notebooks should have access to the same packages that I installed in my Flask app (e.g. specific versions of numpy, pandas, etc.).
How can I achieve this?
What I tried so far: adding a second service that uses the jupyter/minimal-notebook image. It works (sort of), but packages from my base image need to be reinstalled in every Jupyter notebook. I included the Dockerfile and compose.yml file below. My requirements.txt includes `jupyter==1.0.0` and 10 more jupyter and jupyterlab packages.

The Dockerfile for my base image (Flask + TailwindCSS + DaisyUI, the last two installed via npm):
```dockerfile
# Stage 1: Build Python dependencies
FROM python:3.11-slim AS builder
RUN apt-get update && apt-get install -y gcc
WORKDIR /app
COPY requirements.txt ./
# RUN pip install -r requirements.txt
RUN pip install --upgrade pip && pip install -r requirements.txt
# # Check if Flask is installed
# RUN python -c "import flask; print('Flask installed successfully')"

# Stage 2: Build Node.js dependencies
FROM node:18-alpine AS node-builder
WORKDIR /app
COPY package.json ./
RUN npm install

# Stage 3: Final image with both dependencies
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /app /app
COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=node-builder /app/node_modules /app/node_modules

# Expose the port
EXPOSE 5000
```
And my compose.yml file:
```yaml
services:
  web:
    build: .
    command: python app.py runserver 0.0.0.0:5000
    volumes:
      - .:/app
    ports:
      - "5123:5000"
  jupyter:
    image: jupyter/minimal-notebook
    environment:
      - JUPYTER_TOKEN=iambatman
    volumes:
      - ./:/home/jovyan/
      - ../python_env:/usr/local/share/jupyter/kernels/python3
    ports:
      - "8123:8888"
    command: jupyter lab --ip=0.0.0.0 --port=8888
```
I am not an expert in Docker; any help is highly appreciated!
You can't simply share files between containers. This is doubly true for things like libraries that are part of your application's code base, since they may be tied to a specific Python installation or setup. (Bind-mounting a virtual environment from the host system definitely won't work, for example.)
However, you can create a Dockerfile `FROM` any image you want. It can help to find and read through the base image's Dockerfile; the Jupyter images are built from the jupyter/docker-stacks repository on GitHub. Finding the corresponding documentation is also useful; the Jupyter Docker Stacks documentation notes:
> The `jovyan` user has full read/write access to the `/opt/conda` directory. You can use either `mamba`, `pip`, or `conda` (`mamba` is recommended) to install new packages without any additional permissions.
Following its example, you could build a custom image like
```dockerfile
FROM quay.io/jupyter/minimal-notebook

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt && \
    fix-permissions "${CONDA_DIR}" && \
    fix-permissions "/home/${NB_USER}" && \
    rm requirements.txt
```
(That's the entire Dockerfile; common metadata like `USER` and `CMD` is inherited from the base image.)
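Before wiring this into Compose, you can sanity-check the image by hand. A sketch, assuming the file is saved as Dockerfile.jupyter (the name used in the Compose fragment below) and that pandas is one of the packages in your requirements.txt:

```sh
# Build the Jupyter image on its own and confirm a package
# from requirements.txt is importable (pandas is an assumption)
docker build -f Dockerfile.jupyter -t my-jupyter .
docker run --rm my-jupyter python -c "import pandas; print(pandas.__version__)"
```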
This won't "share libraries" per se with the main application image, but since both images install from the same requirements.txt, you should get the same libraries in both (provided the versions in it are pinned).
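If your requirements.txt doesn't pin exact versions yet, one way to capture them is to freeze what the working Flask image already has. This is a sketch; it assumes the `web` service from your compose.yml builds, and that you want every installed package pinned:

```sh
# Overwrite requirements.txt with the exact versions installed
# in the Flask image ("web" is the service name from compose.yml)
docker compose run --rm web pip freeze > requirements.txt
```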
In the Compose file, you need to tell Compose to build this image and not just use the upstream image. Again, you do not need volumes to inject or overwrite these libraries because they're included in the image.
```yaml
  jupyter:
    build:
      context: .
      dockerfile: Dockerfile.jupyter
    environment:
      - JUPYTER_TOKEN=iambatman
    volumes:
      - ./:/home/jovyan/
    ports:
      - "8123:8888"
    # no image:, shouldn't need to override command:
```
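After rebuilding, you can confirm both containers really do see the same package versions. A sketch, using the `web` and `jupyter` service names from the Compose files above; expect the diff to show only Jupyter's own packages as extras:

```sh
# Rebuild and start both services
docker compose up -d --build
# Dump installed packages from each container and compare
# (-T disables the pseudo-TTY so output redirects cleanly)
docker compose exec -T web pip freeze > web-packages.txt
docker compose exec -T jupyter pip freeze > jupyter-packages.txt
diff web-packages.txt jupyter-packages.txt
```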