I currently have a website deployed using multiple pods: 1 for the client (nginx), and 4 pods for the server (node.js). But I've had to copy/paste the yaml for the server pods, name them differently and change their ports (3001, 3002, 3003, 3004).
I'm guessing this could be simplified by using kind: Deployment with replicas: 4 in the server yaml, but I don't know how to change the port numbers.
I currently use the following commands to get everything up and running:
podman play kube server1-pod.yaml
podman play kube server2-pod.yaml
podman play kube server3-pod.yaml
podman play kube server4-pod.yaml
podman play kube client-pod.yaml
Here's my existing setup on a CentOS 8 machine with Podman 3.0.2-dev:
client-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T00:00:00Z"
  labels:
    app: client-pod
  name: client-pod
spec:
  hostname: client
  containers:
  - name: client
    image: registry.example.com/client:1.2.3
    ports:
    - containerPort: 8080
      hostPort: 8080
    resources: {}
status: {}
server1-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-07-29T00:00:00Z"
  labels:
    app: server1-pod
  name: server1-pod
spec:
  hostname: server1
  containers:
  - name: server1
    image: registry.example.com/server:1.2.3
    ports:
    - containerPort: 3000
      hostPort: 3001 # server2 uses 3002 etc.
    env:
    - name: NODE_ENV
      value: production
    resources: {}
status: {}
nginx.conf
# node cluster
upstream server_nodes {
    server api.example.com:3001 fail_timeout=0;
    server api.example.com:3002 fail_timeout=0;
    server api.example.com:3003 fail_timeout=0;
    server api.example.com:3004 fail_timeout=0;
}

server {
    listen 8080;
    listen [::]:8080;
    server_name api.example.com;

    location / {
        root /usr/share/nginx/html;
        index index.html;
    }

    # REST API requests go to node.js
    location /api {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'Upgrade';
        proxy_read_timeout 300;
        proxy_request_buffering off;
        proxy_redirect off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_pass http://server_nodes;
        client_max_body_size 10m;
    }
}
I tried using kompose convert to turn the Pod into a Deployment and then setting replicas to 4, but since the ports are all the same, the first container starts on 3001 and the rest fail to start because 3001 is already taken.
server-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.7.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: server
  name: server
spec:
  replicas: 4
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: server
    spec:
      containers:
      - env:
        - name: NODE_ENV
          value: production
        image: registry.example.com/server:1.2.3
        name: server
        ports:
        - containerPort: 3000
          hostPort: 3001
        resources: {}
      restartPolicy: Always
status: {}
How can I specify that each subsequent replica needs to use the next port up?
You can explore putting a Kubernetes Service in front of the replicas. The Service load-balances requests across all pods that match its selector field. All your backend pods can then keep using the same port, as they already do with replicas, and you no longer need to configure a different port in each pod.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend-pod # give your deployment's pod template a matching label
  ports:
  - port: 3000       # the port the Service listens on, the one you configure in nginx.conf later. It can be different from the targetPort
    targetPort: 3000 # requests are redirected to this port on the pods
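For the selector to match, the Deployment's pod template must carry the same label. A minimal sketch of what that could look like (the names backend and app: backend-pod are placeholders to match the Service above, and the image is taken from your existing yaml); note there is no hostPort, so the four replicas no longer collide:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 4
  selector:
    matchLabels:
      app: backend-pod
  template:
    metadata:
      labels:
        app: backend-pod      # must match the Service's selector
    spec:
      containers:
      - name: server
        image: registry.example.com/server:1.2.3
        ports:
        - containerPort: 3000 # no hostPort needed; the Service does the exposing
        env:
        - name: NODE_ENV
          value: production
```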
Since you now access the pods via the service, you also need to modify nginx.conf to point at the service directly. You no longer need to list every pod on its own line, which also gains you flexibility: if you scale the deployment up to 10 replicas, for example, you do not have to add the new servers here. The service does this dirty work for you.
# node cluster
upstream server_nodes {
    server backend-service:3000 fail_timeout=0;
}