I am trying to connect a local Llama 2 model served by Ollama, which listens on port 11434 on my host machine, to my Docker container running Ubuntu 22.04. I can confirm the Ollama server definitely works and is accessible at http://localhost:11434/. In the Docker container I am also running a gmailctl service and was able to successfully connect to the Google / Gmail API to read and send emails from my Google account. Now I want to wait for an email and let the LLM answer it back to the sender. However, I am not able to publish port 11434 in order to connect the model to the container.
I tried setting up devcontainer.json file to forward the ports:
```json
{
  "name": "therapyGary",
  "build": {
    "context": "..",
    "dockerfile": "../Dockerfile"
  },
  "forwardPorts": [80, 8000, 8080, 11434]
}
```
I tried exposing the ports in the Dockerfile:
```dockerfile
EXPOSE 80
EXPOSE 8000
EXPOSE 8080
EXPOSE 11434
```
These seem to add the ports to the container, and Docker is aware of them, but when I check the port status for the running container, I get this message: `Error: No public port '11434' published for 5ae41009199a`
I also tried setting up the docker-compose.yaml file:
```yaml
services:
  my_service:
    image: 53794c7c792c  # Replace with your actual Docker image name
    ports:
      - "11434:11434"
      - "8000:8000"
      - "8080:8080"
      - "80:80"
```
But there seems to be a problem with it: any container started from it stops immediately.
I tried stopping the Ollama model before running the container, so as not to create a port conflict, but that did not help either. Any suggestions are very welcome.
Thanks!
-- edit -- adding the Dockerfile:

```dockerfile
FROM ubuntu:22.04

ENV DEBIAN_FRONTEND=noninteractive
ENV GMAILCTL_VERSION=0.10.1

RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    xdotool \
    curl \
    software-properties-common \
    libreoffice \
    unzip \
    && apt-get clean

RUN pip3 install --upgrade pip
RUN pip3 install google-api-python-client google-auth-httplib2 google-auth-oauthlib pandas requests

RUN useradd -ms /bin/bash devuser
RUN mkdir -p /workspace && chown -R devuser:devuser /workspace

USER root
WORKDIR /workspace
COPY . .
RUN chown -R devuser:devuser /workspace

EXPOSE 80
EXPOSE 8000
EXPOSE 8080
EXPOSE 11434

CMD [ "bash" ]
```
So remove the `EXPOSE 11434` statement: what that does is let you connect to a service running *inside* the Docker container on that port, but Ollama is running on your host machine, not in your container.

To let the Docker container see port 11434 on your host machine, you need to use the `host` network driver, so the container can see anything on your local network. To do this, you can use the `runArgs` parameter:
```json
{
  "name": "therapyGary",
  "build": {
    "context": "..",
    "dockerfile": "../Dockerfile"
  },
  "forwardPorts": [80, 8000, 8080, 11434]
}
```
would become
```json
{
  "name": "therapyGary",
  "build": {
    "context": "..",
    "dockerfile": "../Dockerfile"
  },
  "runArgs": ["--net=host"]
}
```
Then, from within your container, you should be able to contact the LLM on port 11434 by referencing `localhost` or `127.0.0.1`, e.g. with netcat: `nc localhost 11434`.
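Once host networking is in place, you can also hit Ollama's HTTP API directly from code inside the container. A minimal sketch using only the standard library, assuming Ollama's `/api/generate` endpoint and a pulled `llama2` model (the model name is an assumption; swap in whatever `ollama list` shows):

```python
import json
import urllib.request

# With --net=host, localhost inside the container is the host machine,
# so the host's Ollama server answers on port 11434.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama2"):
    # stream=False asks Ollama for one JSON object instead of chunked lines.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt, model="llama2"):
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. ask_ollama("Draft a short, friendly reply to this email: ...")
```

If the request times out, that usually means the container is not actually on the host network yet.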
If you're using Docker Desktop, you need to enable host networking by going to the *Features in development* tab in *Settings* and selecting the *Enable host networking* option, per the documentation here: Docker Desktop. As a side note, you can use `--net=host` or `--network=host`; both work on my machine using Windows 11 and Docker Desktop.
If you want to use a Docker Compose YAML file, use the `network_mode` parameter instead:
```yaml
services:
  my_service:
    image: 53794c7c792c  # Replace with your actual Docker image name
    network_mode: "host"
```
Because you're putting the container on the host network, there is no need to expose ports, since it's like plugging your container directly into your network. See the Note in the documentation.
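With connectivity sorted, the reply half of your workflow is ordinary Gmail API usage. A rough sketch of turning the model's answer into a sendable message, assuming an authorized `google-api-python-client` service object; `build_reply` and the commented send call are illustrative, not part of your existing code:

```python
import base64
from email.mime.text import MIMEText

def build_reply(sender, subject, answer_text):
    # The Gmail API expects {"raw": <base64url-encoded RFC 2822 message>}.
    msg = MIMEText(answer_text)
    msg["to"] = sender
    msg["subject"] = "Re: " + subject
    return {"raw": base64.urlsafe_b64encode(msg.as_bytes()).decode()}

# Hypothetical usage with an authorized Gmail service object:
# service.users().messages().send(
#     userId="me", body=build_reply(sender, subject, answer)
# ).execute()
```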