Tags: docker, ollama, mistral-7b

How to build the Mistral model into an Ollama image permanently?


I would like to create a Dockerfile that runs Ollama with the Mistral model built in. So far I have only achieved this: when I run Ollama, it downloads Mistral at container startup, all from a single Dockerfile (at first I used Docker Compose, but eventually managed to get it down to one Dockerfile).

I'm wondering if it's possible to build the Mistral model into the Ollama image permanently. Here's my current solution:

entrypoint.sh

#!/bin/sh
/bin/ollama serve &

# Wait for the server to start
sleep 5

# Execute the curl command
curl -X POST -d '{"name": "mistral"}' http://127.0.0.1:11434/api/pull

# Wait indefinitely to keep the container running
tail -f /dev/null

and the Ollama image (Dockerfile):

# Use a base image for the application service
FROM ollama/ollama:0.1.37

# Expose port 11434 (assuming the application listens on this port)
EXPOSE 11434

# Define a volume for storing Ollama data
VOLUME /root/.ollama

# Install curl (assuming it's not already installed in the base image)
RUN apt-get update && apt-get install -y curl
 
# Define volumes
VOLUME ollama_data

# Copy the entrypoint script into the image
COPY entrypoint.sh /usr/local/bin/entrypoint.sh

# Make the script executable
RUN chmod +x /usr/local/bin/entrypoint.sh

# Set the entrypoint to the script
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

and it works, but it downloads the Mistral model every time a container starts. I would like to download it only once, during the image build. Is that possible?
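
For reference, this is roughly how I build and start it today (the image and container names are just placeholders I use locally); the model gets pulled on every fresh container start rather than at build time:

docker build -t ollama-mistral .
docker run -d -p 11434:11434 --name ollama-mistral ollama-mistral
# the entrypoint pulls Mistral here, on every new container
docker logs -f ollama-mistral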


Solution

  • If that API call retrieves the model data and stores it locally, then it should be possible to run it in the Dockerfile. A RUN command doesn't persist running processes beyond its own build step, so it is fine to start a temporary server, run the curl command, and let the build step's normal teardown clean up the background process. This all needs to happen within a single RUN command, like

    RUN ollama serve & \
        curl --retry 10 --retry-connrefused --retry-delay 1 http://localhost:11434/ && \
        curl -X POST -d '{"name": "mistral"}' http://localhost:11434/api/pull
    

    In terms of Dockerfile commands, this needs to be after you RUN apt-get install curl, and before any VOLUME directive that affects the data directory. If the base image Dockerfile itself declares a VOLUME then this may not be possible (for this image in particular it doesn't seem to). (You may not need or want a VOLUME directive at all.)

    Once this is in the Dockerfile, you can get rid of the custom entrypoint script entirely: the base image already runs ollama serve as its default command, so you don't need an ENTRYPOINT or CMD line at all. The EXPOSE line is also in the base image. You might be able to reduce the Dockerfile to just

    FROM ollama/ollama:0.1.48
    RUN apt-get update && \
        DEBIAN_FRONTEND=noninteractive \
        apt-get install --no-install-recommends --assume-yes \
          curl
    RUN ollama serve & \
        curl --retry 10 --retry-connrefused --retry-delay 1 http://localhost:11434/ && \
        curl -X POST -d '{"name": "mistral"}' http://localhost:11434/api/pull
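
    As a quick check, building and running the resulting image should serve Mistral without any pull at container start; the image and container names below are placeholders:

    docker build -t ollama-mistral .
    docker run -d -p 11434:11434 --name ollama-mistral ollama-mistral
    # list the models already baked into the image; "mistral" should appear
    # right away, with no download happening at startup
    curl http://localhost:11434/api/tags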