I'm starting out in K8s and I'm not quite wrapping my head around deploying a StatefulSet with multiple replicas bound to a local disk, comparing the PV + PVC + SC scenario vs. the volumeClaimTemplates + HostPath scenario.

My goal is to deploy a MongoDB StatefulSet with 3 replicas running in Mongo's replica-set mode (its ReplicaSet, not the Kubernetes one) and bind each one to a local SSD. I ran a few tests, and there are a few concepts I need to get straight.
Scenario a) using PV + PVC + SC:

If in my StatefulSet's container (with replicas: 1) I declare a volumeMounts and a volumes entry, I can point it to a PVC that uses a SC, which in turn is used by a PV pointing to a physical local-SSD folder. The concept is straightforward; it all maps beautifully. But if I increase the replicas to more than one, then from the second pod onward the pods won't find a volume to bind to, and I get the 1 node(s) didn't find available persistent volumes to bind error. This made me realize that the storage capacity the PVC reserves on that PV is not replicated along with the pods in the StatefulSet and mapped to each created pod.
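For reference, this is the wiring I mean; a minimal fragment using the names from the full manifests at the end of the question (the volumes block is the part that gets commented out in scenario b):

```yaml
# Pod spec fragment: every replica mounts the same single claim.
volumeMounts:
  - name: mg-pv-cont
    mountPath: /data/db
# ...
volumes:
  - name: mg-pv-cont
    persistentVolumeClaim:
      claimName: mg-pvc # one PVC -> bound to exactly one PV
```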
Scenario b) volumeClaimTemplates + HostPath:

I commented out the volume and used volumeClaimTemplates instead, which indeed works as I was expecting in scenario a: for each created pod an associated claim gets created, and some storage capacity gets reserved for that pod. Again a pretty straightforward concept, but it only works as long as I use storageClassName: hostpath in the volumeClaimTemplates (see the fragment below). I tried using my SC and the result is the same 1 node(s) didn't find available persistent volumes to bind error.
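This is the template that works, taken from the full StatefulSet manifest at the end of the question:

```yaml
# StatefulSet fragment: one claim is generated per pod (mg-pv-cont-mongo-0, -1, ...).
volumeClaimTemplates:
  - metadata:
      name: mg-pv-cont # must match the volumeMounts name or it won't bind
    spec:
      storageClassName: hostpath # works; swapping in mg-sc gives the binding error
      accessModes: ['ReadWriteOnce']
      resources:
        requests:
          storage: 50Mi
```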
Also, when created via volumeClaimTemplates, the PV names are useless and confusing, as they all start with pvc-:
```
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                        STORAGECLASS   REASON   AGE
mg-pv                                      3Gi        RWO            Delete           Available                                mg-sc                   64s
pvc-32589cce-f472-40c9-b6e4-dc5e26c2177a   50Mi       RWO            Delete           Bound       default/mg-pv-cont-mongo-3   hostpath                36m
pvc-3e2f4e50-30f8-4ce8-8a62-0b923fd6aa79   50Mi       RWO            Delete           Bound       default/mg-pv-cont-mongo-1   hostpath                37m
pvc-8f4ff966-c30a-469f-a68d-ed579ef2a96f   50Mi       RWO            Delete           Bound       default/mg-pv-cont-mongo-4   hostpath                36m
pvc-9f8c933b-85d6-4024-8bd0-6668feee8757   50Mi       RWO            Delete           Bound       default/mg-pv-cont-mongo-2   hostpath                37m
pvc-d6c212f3-2391-4137-97c3-07836c90b8f3   50Mi       RWO            Delete           Bound       default/mg-pv-cont-mongo-0   hostpath                37m

vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pvc
NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mg-pv-cont-mongo-0   Bound     pvc-d6c212f3-2391-4137-97c3-07836c90b8f3   50Mi       RWO            hostpath       37m
mg-pv-cont-mongo-1   Bound     pvc-3e2f4e50-30f8-4ce8-8a62-0b923fd6aa79   50Mi       RWO            hostpath       37m
mg-pv-cont-mongo-2   Bound     pvc-9f8c933b-85d6-4024-8bd0-6668feee8757   50Mi       RWO            hostpath       37m
mg-pv-cont-mongo-3   Bound     pvc-32589cce-f472-40c9-b6e4-dc5e26c2177a   50Mi       RWO            hostpath       37m
mg-pv-cont-mongo-4   Bound     pvc-8f4ff966-c30a-469f-a68d-ed579ef2a96f   50Mi       RWO            hostpath       37m
mg-pvc               Pending                                                                        mg-sc          74s
```
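(The generated names can at least be mapped back: each PVC records its bound PV in spec.volumeName, so a one-liner like this shows the pairing. The jsonpath expression is just one way to pull the field out.)

```
kubectl get pvc mg-pv-cont-mongo-0 -o jsonpath='{.spec.volumeName}'
```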
Is there any way to set the names of the PVs created by volumeClaimTemplates to something more useful, as when declaring a PV? And how do I point the volumeClaimTemplates' PVs to an SSD, as I'm doing in scenario a?

Many thanks.

Here are my manifests:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mg-pv
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: mg-sc
  local:
    path: /Volumes/ProjectsSSD/k8s_local_volumes/mongo/mnt/data/unt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
```
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mg-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mg-pvc
spec:
  storageClassName: mg-sc
  # volumeName: mg-pv
  resources:
    requests:
      # storage: 1Gi
      storage: 50Mi
  accessModes:
    - ReadWriteOnce
```
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      role: mongo
      environment: test
  serviceName: 'mongo'
  replicas: 5
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          command:
            - mongod
            - '--bind_ip'
            - all
            - '--replSet'
            - rs0
            # - "--smallfiles"
            # - "--noprealloc"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mg-pv-cont
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: 'role=mongo,environment=test'
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: 'mongo'
      ### with volumes you have to have one persistent volume for each created pod; useful only for a static set of pods
      # volumes:
      #   - name: mg-pv-cont
      #     persistentVolumeClaim:
      #       claimName: mg-pvc
  ## volumeClaimTemplates create a claim for each created pod, so when scaling the number of pods up or down each pod claims its own space in a persistent volume.
  volumeClaimTemplates:
    - metadata:
        name: mg-pv-cont # this binds
        # name: mg-pv-pvc-template # must be the same name as in volumeMounts or it won't bind;
        # otherwise the rollout hangs with:
        #   Waiting for deployments to stabilize...
        #   - statefulset/mongo: Waiting for statefulset spec update to be observed...
      spec:
        # storageClassName: mg-sc
        storageClassName: hostpath
        accessModes: ['ReadWriteOnce']
        resources:
          requests:
            storage: 50Mi
```
Ok, after fiddling with it a bit more and testing a couple more configurations, I found out that the PVC-to-PV binding happens in a 1:1 manner: once a PV has bound to a claim (whether a plain PVC or one generated by volumeClaimTemplates), no other claim can bind to it. So the solution is simply to create as many PVs as the number of pods you expect to create, plus some extra for scaling the replicas of your StatefulSet up and down.
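A minimal sketch of what that looks like, reusing the mg-pv manifest from the question; the PV names and per-PV folders (pv-0, pv-1, ...) are just illustrative:

```yaml
# One PV per expected pod, each with its own folder on the local SSD.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mg-pv-0 # illustrative name; mg-pv-1, mg-pv-2, ... follow the same pattern
spec:
  capacity:
    storage: 3Gi
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  storageClassName: mg-sc
  local:
    path: /Volumes/ProjectsSSD/k8s_local_volumes/mongo/mnt/data/pv-0 # illustrative per-PV folder
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
```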
Now in the volumeClaimTemplates: spec: storageClassName: you can use the SC you defined, so those PVs get used (see the fragment below). There's no use for a standalone PVC when using volumeClaimTemplates: it would just create a claim that nobody uses.
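So the template from the question just switches its storage class; everything else stays the same:

```yaml
volumeClaimTemplates:
  - metadata:
      name: mg-pv-cont # still matching the volumeMounts name
    spec:
      storageClassName: mg-sc # the local-volume SC instead of hostpath
      accessModes: ['ReadWriteOnce']
      resources:
        requests:
          storage: 50Mi
```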
Hope this will help others starting out in the Kubernetes world. Cheers.