I am trying to create the following StatefulSet, but it fails within about 10 seconds while trying to start its only Pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dbss
spec:
  selector:
    matchLabels:
      app: db-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: db-pod
    spec:
      containers:
      - name: db-cont
        image: mysql:5
        ports:
        - containerPort: 3306
The error message is not descriptive enough in my opinion:
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  5m9s                  default-scheduler  Successfully assigned dbss-0 to minikube
  Normal   Pulled     3m16s (x5 over 5m9s)  kubelet            Container image "mysql:5" already present on machine
  Normal   Created    3m16s (x5 over 5m9s)  kubelet            Created container db-cont
  Normal   Started    3m16s (x5 over 5m9s)  kubelet            Started container db-cont
  Warning  BackOff    2s (x21 over 4m55s)   kubelet            Back-off restarting failed container db-cont in pod dbss-0_k8s-overview(d7c03bf1-051e-47f8-bb72-6b1669584011)
Is it because there is no PersistentVolume mounted to the StatefulSet? Is there a way to see a more descriptive error message that shows the actual reason?
I know of the following limitation from https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations:

The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.
Yet I am not sure I understand the quoted limitation correctly, especially after looking at the events of the failing Pod.
It is possible to create a StatefulSet without a PersistentVolume:
apiVersion: v1
kind: Service
metadata:
  name: nginx-hello-world
  labels:
    app: nginx-hello-world
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx-hello-world
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-hello-world
spec:
  selector:
    matchLabels:
      app: nginx-hello-world # has to match .spec.template.metadata.labels
  serviceName: "nginx-hello-world"
  template:
    metadata:
      labels:
        app: nginx-hello-world # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx-hello-world
        image: dockerbogo/docker-nginx-hello-world
        ports:
        - containerPort: 80
          name: web
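Applying this manifest (kubectl apply -f <file>) and then running kubectl get pods should show a Pod named nginx-hello-world-0 come up, even though no PersistentVolume is involved.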
In the example on the Kubernetes reference page, the nginx image they use serves its content from a volume; that is why the example mounts one, as in the abridged excerpt below.
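For reference, the storage-related part of that docs example looks roughly like this (abridged from the linked page; the volume name www, the mount path, and the requested size are theirs, not part of the manifests above):

        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html  # nginx serves the content stored on this volume
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi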
According to the events you provided, your Pod is already scheduled, but the container in the Pod fails to start, and Kubernetes keeps trying to restart it; that is what "Back-off restarting failed container"
means, see this documentation.
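To get a more descriptive error message than the events, read the container's own logs, e.g. kubectl logs -n k8s-overview dbss-0 (add --previous to see the output of the last crashed attempt). For the official mysql image, the logs will most likely complain that no root password option is set: the image refuses to start unless one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD, or MYSQL_RANDOM_ROOT_PASSWORD is provided. A minimal sketch of the missing env section, assuming that is indeed what the logs show (the value is a placeholder; in practice take it from a Secret):

      containers:
      - name: db-cont
        image: mysql:5
        env:
        - name: MYSQL_ROOT_PASSWORD  # placeholder value; keep real credentials in a Secret
          value: "changeme"
        ports:
        - containerPort: 3306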