postgresql, docker, kubernetes, flask, deployment

Flask + Postgres on Kubernetes: entrypoint.sh "No such file"


I have a Flask + Postgres application, containerized with docker-compose.

services:
  database:
    image: postgres:latest
    container_name: hse-database
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=hse-app-db
    expose:
      - "5432"
    ports:
      - "5432:5432"
    restart: always
  api:
    build: ./hse_api
    container_name: hse_api
    ports:
      - "5000:5000"
    env_file: ./hse_api/.env
    volumes:
      - ./hse_api:/usr/src/app/api
    depends_on:
      - database
    restart: always

I have an entrypoint file that seeds the database using functions defined in my manage.py file, and so far it works fine when running under Docker alone.
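
For reference, an entrypoint of this kind typically looks something like the sketch below. The command names are placeholders; the real ones are whatever manage.py defines:

#!/bin/sh
# Sketch of a seed-then-serve entrypoint. "create_db" and "seed_db" are
# placeholder names for the commands actually defined in manage.py.
set -e

# Wait for Postgres to accept connections before seeding
# (assumes netcat is available in the image, and that DB_HOST/DB_PORT
# come from the .env file).
while ! nc -z "$DB_HOST" "$DB_PORT"; do
  sleep 1
done

python manage.py create_db
python manage.py seed_db

# Hand off to the Flask server as PID 1.
exec flask run --host=0.0.0.0 --port=5000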

When I also launch the Kubernetes pods for the project, Postgres launches fine, as seen below:

NAME                                READY   STATUS             RESTARTS       AGE
pod/hse-api-5c7f656b8c-d59v8        0/1     CrashLoopBackOff   12 (56s ago)   37m
pod/postgres-7d4b444649-nnc9n       1/1     Running            0              37m

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/hse-api        ClusterIP      10.103.119.87   <none>        5000/TCP         37m
service/hse-database   ClusterIP      10.108.4.231    <none>        5432/TCP         37m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hse-api        0/1     1            0           37m
deployment.apps/postgres       1/1     1            1           37m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/hse-api-5c7f656b8c        1         1         0       37m
replicaset.apps/postgres-7d4b444649       1         1         1       37m

The API pod, on the other hand, goes into CrashLoopBackOff because entrypoint.sh cannot be found:

Port:           5000/TCP
Host Port:      0/TCP
State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       ContainerCannotRun
  Message:      failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/usr/src/app/api/entrypoint.sh": stat /usr/src/app/api/entrypoint.sh: no such file or directory: unknown

However, when I run a container from my Docker Hub image:

 docker run -it --rm imranoshpro/hse-api:v0.2.1 /bin/sh

It successfully locates the file; the error this time is that it cannot reach the database server (expected, since no database container is running alongside it).
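
For example, checking from inside that shell confirms the script is where the error message expects it:

 ls -l /usr/src/app/api/entrypoint.sh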

What could be causing the entrypoint file not to be found in the Kubernetes pod?

Deployment YAML below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hse-api
  namespace: hse-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hse-api
  template:
    metadata:
      labels:
        app: hse-api
    spec:
      containers:
      - name: hse-api
        image: imranoshpro/hse-api:v0.2.1
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        envFrom:
        - secretRef:
            name: hse-api-secrets
        volumeMounts:
        - name: api-volume
          mountPath: /usr/src/app/api
      volumes:
      - name: api-volume
        persistentVolumeClaim:
          claimName: api-pvc
      imagePullSecrets:
      - name: docker-hub-secret

And the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-pvc
  namespace: hse-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Solution

  • In your Kubernetes setup, you're creating a PersistentVolumeClaim and then mounting it over /usr/src/app/api in your Pod spec. The PersistentVolume it binds to starts off empty until something explicitly writes to it, so the mount hides all of the code in your image.

    You should delete the entire PersistentVolumeClaim, plus the parts of the Deployment that mount it into the container:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: api-pvc  # <-- delete this entire object
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hse-api
    spec:
      template:
        spec:
          containers:
          - name: hse-api
            volumeMounts:  # <-- delete this entire section
            - name: api-volume
              mountPath: /usr/src/app/api
          volumes:         # <-- delete this entire section
          - name: api-volume
            persistentVolumeClaim:
              claimName: api-pvc
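
    For reference, here's a sketch of the corrected Deployment with those parts removed (same names and image as in the question):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hse-api
      namespace: hse-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hse-api
      template:
        metadata:
          labels:
            app: hse-api
        spec:
          containers:
          - name: hse-api
            image: imranoshpro/hse-api:v0.2.1
            imagePullPolicy: Always
            ports:
            - containerPort: 5000
            envFrom:
            - secretRef:
                name: hse-api-secrets
          imagePullSecrets:
          - name: docker-hub-secret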
    

    Similarly, in the Compose file, you should delete the volumes: block for the api service:

    services:
      api:
        volumes:  # <-- delete this entire section
          - ./hse_api:/usr/src/app/api
    

    In plain Docker or with Compose, this bind mount replaces the code in the image with whatever is on your host system. Kubernetes, typically running on remote cluster nodes, can't see your host filesystem at all. Furthermore, since the mount replaces the image's code, in this Compose setup you're never actually running what's in your image, which leaves you prone to exactly the "works on my machine" problems that Docker setups are meant to avoid.
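
    Once the PVC and the mounts are gone, you can confirm the fix from outside the cluster, for example:

    kubectl -n hse-app rollout restart deployment/hse-api
    kubectl -n hse-app exec deploy/hse-api -- ls -l /usr/src/app/api/entrypoint.sh

    The first command re-rolls the Deployment; the second should now list the entrypoint script baked into the image, rather than showing an empty directory from the volume.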