postgresql, docker, nestjs, prisma, migrate

Prisma migration in a Docker - NestJS server


I am implementing an application using a NestJS server, backed by a PostgreSQL database, with Prisma handling data access. I have an issue when trying to run a Prisma migration while launching my service from my Dockerfile. Here is my docker-compose.yml file:

version: '3.8'

services:
  # POSTGRES
  postgres:
    container_name: postgres
    image: postgres:13.5
    restart: always
    ports:
      - 5432:5432
    env_file:
      - ./backend/.env
    volumes:
      - postgres:/var/lib/postgresql/data
    networks:
      - transcendance

  # BACKEND
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
      args:
        - BUILDKIT_INLINE_CACHE=1
    container_name: backend
    restart: always
    env_file:
      - ./backend/.env
    ports:
      - 3001:3001
      - 5555:5555 # Expose a port for Prisma Studio
    depends_on:
      - postgres
    networks:
      - transcendance
    volumes:
      - ./backend:/app

networks:
  transcendance:

volumes:
  postgres:

I am building and running my backend container with this Dockerfile:

FROM node:lts

WORKDIR /app

COPY package*.json ./
COPY prisma ./prisma/
COPY entrypoint.sh /app/entrypoint.sh
COPY . .

RUN npm i -g @nestjs/cli
RUN npm install

RUN chmod +x /app/entrypoint.sh

EXPOSE 3001 3002 5555
ENTRYPOINT [ "/app/entrypoint.sh" ]
CMD [ "npm", "run", "start:dev" ] 

I have set up this entrypoint.sh file as follows in order to run the migration:

#!/bin/sh

# Apply Prisma migrations and start the application
npx prisma migrate deploy
npx prisma generate

# Run database migrations
npx prisma migrate dev --name init 

# Run the main container command
exec "$@"

All my containers are created, but only my database is running; the others stay in the Created status. If I remove the script and its execution from the Dockerfile, run docker compose up --build, and then run the migration manually from inside my backend container, everything works well.

Any help on this? Thank you!


Solution

  • In your Compose file, you have a volumes: block that overwrites the image's code with content from the host. Delete this.

    services:
      backend:
        volumes:            # <-- delete
          - ./backend:/app  # <-- delete
    

    When this block is present, the /app directory in the container is the ./backend directory on the host system. Whatever was in the image under /app is hidden and replaced by that mounted content.

    More specifically, when your Dockerfile says

    RUN chmod +x /app/entrypoint.sh
    

    that permission change is hidden by the bind mount. If the file isn't executable on the host system, then it won't be executable when the container runs either, and you'll get an error running it as the container's entrypoint.
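    If you do want to keep the bind mount for live-reload development, the host-side fix is to grant the execute bit on the host, since the container sees the host file's permissions. A minimal sketch of the effect (using a throwaway temp directory for illustration, not your actual backend/entrypoint.sh):

    ```shell
    # Illustration only; for the real project you would run
    # `chmod +x backend/entrypoint.sh` on the host before `docker compose up`.
    tmp=$(mktemp -d)
    printf '#!/bin/sh\necho started\n' > "$tmp/entrypoint.sh"

    # Without the execute bit, running the script fails.
    "$tmp/entrypoint.sh" 2>/dev/null && echo "ran" || echo "permission denied"

    chmod +x "$tmp/entrypoint.sh"   # grant the execute bit on the host side
    "$tmp/entrypoint.sh"            # now runs: prints "started"
    ```

    Because the bind mount passes the host's permission bits straight through, this chmod is what makes the ENTRYPOINT runnable inside the container.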

    Mounts like this also hide the image's node_modules directory. If your host system isn't fully compatible with the container environment (same operating system and C library base), this can cause problems at startup. There's a common workaround that uses a Docker feature to store the node_modules tree in an anonymous volume, but then the container environment ignores changes to the package.json file, and you can get different library trees depending on when you first ran the container.
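
    For reference, that anonymous-volume workaround (a sketch based on the service names from the question) is an extra volume entry layered over the bind mount:

    ```yaml
    services:
      backend:
        volumes:
          - ./backend:/app      # bind mount: the host code hides the image's /app ...
          - /app/node_modules   # ... but this anonymous volume is seeded from the
                                # image's node_modules, shadowing the host's copy
    ```

    The anonymous volume is populated from the image the first time the container is created and persists afterwards, which is exactly why it can drift out of sync with package.json.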