I have a Deployment with two containers running in the same Pod in my Kubernetes cluster. One is NGINX and the other is a sidecar (the bash image). This is the definition file:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      volumes:
      - name: logs
        emptyDir: {}
      containers:
      - image: nginx
        name: nginx
        resources: {}
        volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
      - image: bash
        name: sidecar
        command: ["/bin/sh","-c","tail -f /var/log/sidecar/access.log"]
        volumeMounts:
        - name: logs
          mountPath: /var/log/sidecar
I understand that the NGINX image writes its standard logs to /var/log/nginx/access.log
(ref -> https://docs.docker.com/config/containers/logging/).
I'm using an emptyDir volume, which means all the containers in the Pod share the same storage location on the host machine.
When I curl the Service or the NGINX Pod, I get a successful response from the NGINX server, and its logs can be seen with the kubectl logs <pod name> sidecar -f command on the console.
What I don't fully understand is that the file path in the sidecar container's command is /var/log/sidecar/access.log, not /var/log/nginx/access.log. How am I able to see those logs when the file access.log does not exist inside the directory /var/log/sidecar?
Please advise, thanks
Before getting into your question, you need to know how a Kubernetes Pod works.
In Kubernetes, all the containers in the same Pod are scheduled on the same node, where they are orchestrated by the container runtime.
Taking the Docker runtime for simplicity, Kubernetes spawns two containers according to your configuration: one NGINX and one bash. In container orchestration, you can define a volume and share it across different containers, but behind the scenes they are all using the same folders/files from the host machine.
So, circling back to your question: Kubernetes created two containers on your host machine, and it also created a volume shared between them. Inside the containers, that volume is mounted at different paths, but both paths point to the same source of truth on the worker node. The sidecar's /var/log/sidecar/access.log and NGINX's /var/log/nginx/access.log are therefore the same file.
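You can sketch this sharing with plain shell. The snippet below is only a local analogy, using symlinks instead of real container mounts, and every path in it is invented for the demo:

```shell
# A local analogy of the Pod's shared emptyDir: one host directory,
# exposed under two different paths (symlinks stand in for mounts).
host_dir=$(mktemp -d)   # stands in for the emptyDir directory on the node
demo=$(mktemp -d)
ln -s "$host_dir" "$demo/nginx-view"     # like mounting the volume at /var/log/nginx
ln -s "$host_dir" "$demo/sidecar-view"   # like mounting it at /var/log/sidecar

echo 'GET / 200' > "$demo/nginx-view/access.log"   # the "nginx" side writes...
cat "$demo/sidecar-view/access.log"                # ...and the "sidecar" side reads the same file
# -> GET / 200
```

The two view paths differ, but they resolve to the same directory, exactly as the two mountPath values in your manifest resolve to the same emptyDir.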
If you want to see how this looks on your worker node, you can SSH into the worker and run the following commands.
# Get your container IDs
> docker ps
# Inspect a container
> docker inspect <container>
Look for the Mounts section in the output; you should see that both containers share the same source directory on the host.
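For an emptyDir volume, the Mounts entry looks roughly like the fragment below (the exact source path depends on your setup, and <pod-uid> is a placeholder):

```json
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~empty-dir/logs",
        "Destination": "/var/log/nginx",
        "RW": true
    }
]
```

In the sidecar container the Destination is /var/log/sidecar instead, but the Source is identical, which is why both containers see the same access.log.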