centos8, podman

With Podman, how to turn 4 Pods into a Deployment using 4 replicas on different ports?


I currently have a website deployed using multiple pods: one for the client (nginx) and four for the server (node.js). But I've had to copy/paste the YAML for the server pods, name each one differently, and change their ports (3001, 3002, 3003, 3004).

I'm guessing this could be simplified by using kind: Deployment and replicas: 4 in the server YAML, but I don't know how to change the port numbers.

I currently use the following commands to get everything up and running:

podman play kube server1-pod.yaml 
podman play kube server2-pod.yaml 
podman play kube server3-pod.yaml 
podman play kube server4-pod.yaml 
podman play kube client-pod.yaml

Here's my existing setup on a CentOS 8 machine with Podman 3.0.2-dev:

client-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T00:00:00Z"
  labels:
    app: client-pod
  name: client-pod
spec:
  hostname: client
  containers:
    - name: client
      image: registry.example.com/client:1.2.3
      ports:
        - containerPort: 8080
          hostPort: 8080
      resources: {}
status: {}

server1-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T00:00:00Z"
  labels:
    app: server1-pod
  name: server1-pod
spec:
  hostname: server1
  containers:
    - name: server1
      image: registry.example.com/server:1.2.3
      ports:
        - containerPort: 3000
          hostPort: 3001 # server2 uses 3002 etc.
      env:
        - name: NODE_ENV
          value: production
      resources: {}
status: {}

nginx.conf

# node cluster
upstream server_nodes {
    server api.example.com:3001 fail_timeout=0;
    server api.example.com:3002 fail_timeout=0;
    server api.example.com:3003 fail_timeout=0;
    server api.example.com:3004 fail_timeout=0;
}

server {
    listen       8080;
    listen  [::]:8080;
    server_name  api.example.com;

    location / {
        root   /usr/share/nginx/html;
        index  index.html;
    }

    # REST API requests go to node.js
    location /api {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'Upgrade';
        proxy_read_timeout 300;
        proxy_request_buffering off;
        proxy_redirect off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_pass http://server_nodes;
        client_max_body_size 10m;
    }
}

I tried using kompose convert to turn the Pod into a Deployment and setting replicas to 4, but since every replica uses the same hostPort, the first container starts on 3001 and the rest fail to start because 3001 is already taken.

server-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.7.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: server
  name: server
spec:
  replicas: 4
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: server
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: production
        image: registry.example.com/server:1.2.3
        name: server
        ports:
        - containerPort: 3000
          hostPort: 3001
        resources: {}
      restartPolicy: Always
status: {}

How can I specify that each subsequent replica needs to use the next port up?


Solution

  • In the years since, we had to stick with Docker for various reasons. Eventually we were forced to try Podman again, and this is what now works with Podman 4.9.4:

    podman play kube my-server.yaml --configmap=my-env.yaml
    

    my-server.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        app: my-server
      name: my-server
    spec:
      hostname: my-server
      containers:
        ######################### my-server1 #########################
        - name: server1
          image: registry.example.com/my-server:latest
          ports: # TODO remove containerPort & hostPort for production
            - containerPort: 3001
              hostPort: 3001
          env:
            - name: PORT
              value: "3001"
          envFrom:
            - configMapRef:
                name: my-env
                optional: false
        ######################### my-server2 #########################
        - name: server2
          image: registry.example.com/my-server:latest
          ports: # TODO remove containerPort & hostPort for production
            - containerPort: 3002
              hostPort: 3002
          env:
            - name: PORT
              value: "3002"
          envFrom:
            - configMapRef:
                name: my-env
                optional: false
        ######################### http-server #########################
        - name: client
          image: registry.example.com/my-client:latest
          ports:
            - containerPort: 8080
              hostPort: 8080
            - containerPort: 443
              hostPort: 8443
          volumeMounts:
            - mountPath: /etc/nginx/conf.d/default.conf
              name: nginx-conf
              readOnly: true
            - mountPath: /usr/share/nginx/html/assets
              name: assets
              readOnly: true
      ######################### volumes #########################
      volumes:
        - name: nginx-conf
          hostPath:
            path: ./nginx.conf
            type: File
        - name: assets
          hostPath:
            path: ./assets
            type: Directory
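
    All of the server containers share the pod's network namespace, so each instance has to bind to the port handed to it through the PORT variable set above (which is also why nginx can reach them on localhost). The server code isn't part of this answer; the snippet below is only a minimal sketch of what each container is assumed to do with PORT, using Node's built-in http module (TypeScript, hypothetical):

    // Hypothetical sketch of the node.js server: bind to the per-container port.
    import { createServer } from "node:http";

    const port = Number(process.env.PORT ?? 3000); // PORT is injected per container in my-server.yaml

    createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true, port }));
    }).listen(port, () => console.log(`API listening on ${port}`));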
    

    my-env.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-env
    data:
      ########## General ##########
      NODE_ENV: "production"
    

    nginx.conf

    # [...]
    
    # node cluster
    upstream my_nodes {
        server localhost:3001 fail_timeout=0;
        server localhost:3002 fail_timeout=0;
    }
    
    server {
        # [...]
    
        location /api {
            # [...]
            proxy_pass http://my_nodes;
            # [...]
        }
    
        # [...]
    }
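
    To check the result and tear everything down again, the standard Podman 4.x commands should be enough:

    podman pod ps                    # the my-server pod should show as Running
    podman ps --pod                  # one container per server instance plus the nginx client
    podman kube down my-server.yaml  # stop and remove everything the yaml created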