docker, docker-compose

How can I connect Docker containers without localhost?


I currently have a network of containers running in Docker: a relational DB, a graph DB, a React app, and a FastAPI backend. I've managed to connect the databases to the backend without much trouble, but for some reason I can't get the frontend connected to the backend within the network. When I try connecting via the container name (api:8000/...) I get ERR_NAME_NOT_RESOLVED. However, I am able to access and connect if I go through its mapped localhost port (localhost:8001/...).

As an extra piece of data, I can open a shell within the frontend container, run curl api:8000/, and receive the expected "Hello World" I have at the default route.
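
For reference, the frontend reads the backend address from VITE_API_URL and calls it from the browser. The call looks roughly like this (a simplified sketch, not my real code; the route and handling are illustrative):

// Simplified sketch of the failing frontend call (illustrative route, not the real one).
// VITE_API_URL is set to http://api:8000 in docker-compose.
const apiBase = import.meta.env.VITE_API_URL;

// This runs in the browser, which is where ERR_NAME_NOT_RESOLVED shows up.
fetch(`${apiBase}/`)
  .then((res) => res.json())
  .then((data) => console.log(data));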

DOCKER COMPOSE FILE

services:
  api:
    image: backendtest:1.0
    depends_on:
      graphdb:
        condition: service_healthy
      mysql:
        condition: service_started
    environment:
      - ONGDB_USER=*****
      - ONGDB_PASS=*****
      - ONGDB_URI=bolt://graphdb:7687
      - MYSQL_USER=root
      - MYSQL_PASS=********
      - MYSQL_HOST=mysql
      - MYSQL_DB=some_db
    ports:
      - 8001:8000
  graphdb:
    image: graphfoundation/ongdb:1.0
    ports:
      - 4747:7474
      - 7687:7687
      - 7473:7473
    healthcheck:
      test: ["CMD", "geequel-shell", "-u", "*****", "-p", "*****", "RETURN 1;"]
      interval: 5s
      timeout: 30s
      retries: 5
  mysql:
    image: mysql:8.4
    ports:
      - 13306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=********
  app:
    image: frontimage:1.0
    depends_on:
      - api
    ports:
      - 5173:5173
    environment:
      - VITE_API_URL=http://api:8000

BACKEND DOCKERFILE

FROM python:3.12
WORKDIR /code

COPY requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir -r /code/requirements.txt

COPY . /code/apyi

EXPOSE 8000

CMD ["fastapi", "dev", "apyi/server/server.py", "--host", "0.0.0.0", "--port", "8000"]

FRONTEND DOCKERFILE

FROM node:22
WORKDIR /frontend

COPY package.json /frontend/package.json
COPY package-lock.json /frontend/package-lock.json
RUN npm install

COPY . /frontend

EXPOSE 5173

CMD ["npm", "run", "dev"]

I can use localhost for now since I'm still figuring things out, but as I move towards actually hosting this in containers I'd like to make sure it works via the network address rather than depending on localhost.


Solution

  • The Docker bridge network connects the containers and lets them talk to each other using the Compose service names as hostnames. But that only works from container to container.

    When your frontend runs, it runs in a browser, potentially on the other side of the planet.

    So it's not part of the bridge network and has to access the host machine. The host can then pass the traffic on to a container through a mapped port.

    Even when you run your frontend app on the host machine, it's running in a browser on the host. That browser is also outside the Docker network, so it can't use the service name and has to use localhost.
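
    One way to keep the service name out of the browser entirely is to let the Vite dev server proxy the API calls: the dev server does run inside the app container, on the bridge network, so it can resolve api:8000 even though the browser can't. This is a sketch, not taken from your setup; the /api prefix and the rewrite are assumptions you'd adapt to your routes.

    // vite.config.ts — sketch of a dev-server proxy (assumed setup, adjust to your routes)
    import { defineConfig } from "vite";

    export default defineConfig({
      server: {
        host: true,   // listen on 0.0.0.0 so the mapped port 5173 is reachable from the host
        port: 5173,
        proxy: {
          // The browser calls a relative path like /api/...; the dev server,
          // which lives on the Docker bridge network, forwards it to the api service.
          "/api": {
            target: "http://api:8000",
            changeOrigin: true,
            rewrite: (path) => path.replace(/^\/api/, ""),
          },
        },
      },
    });

    The frontend then uses a relative base (e.g. VITE_API_URL=/api, or plain relative fetch paths) instead of http://api:8000. The same relative URLs keep working later when a reverse proxy sits in front of the containers in production.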