mongodb, kubernetes, replication, persistent-volumes, kubernetes-statefulset

MongoDB StatefulSet pods restarting without any useful error during replication


I am trying to run a MongoDB StatefulSet on minikube with replication, but the pods keep restarting for no apparent reason. I have searched all over trying to debug this. My StatefulSet is below; for the PVCs I am using NFS through an nfs-csi StorageClass (this is for testing only; a sketch of such a class follows).
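For context, a minimal sketch of the kind of nfs-csi class the claim template expects, using the csi-driver-nfs provisioner (the server and share values are placeholders, not my real setup):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com  # placeholder NFS server address
  share: /exports                 # placeholder exported directory
reclaimPolicy: Delete
volumeBindingMode: Immediate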

MongoDB pod logs: https://www.toptal.com/developers/paste-gd/e1C6l8k2

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  selector:
    matchLabels:
      app: mongodb
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongodb
        image: mongo
        #lifecycle:
        # postStart:
        #   exec:
        #     command: ["/bin/sh","-c","mongosh -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD < /tmp/init.js"]
        args: ["--config","/etc/mongod.conf","-vvv"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb-data
          mountPath: /data
        - name: init-scripts
          mountPath: /docker-entrypoint-initdb.d
        - name: keys
          mountPath: /keys
        - name: config
          mountPath: /etc/mongod.conf
          subPath: mongod.conf
        - name: repinit
          mountPath: /tmp/initscripts/
        env:
          - name: MONGO_INITDB_ROOT_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongoadmin
                key: MONGO_INITDB_ROOT_USERNAME
          - name: MONGO_INITDB_ROOT_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongoadmin
                key: MONGO_INITDB_ROOT_PASSWORD
      volumes:
      - name: init-scripts
        configMap:
          name: dbfiles
      - name: keys
        secret:
          secretName: dbkey
          defaultMode: 0400
      - name: config
        configMap:
          name: dbconfig
      - name: repinit
        configMap:
          name: repinit

  volumeClaimTemplates:
  - metadata:
      name: mongodb-data
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: nfs-csi
      resources:
        requests:
          storage: 5Gi
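For completeness: serviceName: "mongo" above requires a headless Service. Mine is not shown, but a minimal sketch of what it assumes (matching the labels and port used here):

apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None   # headless, gives each pod a stable DNS name
  selector:
    app: mongodb
  ports:
  - name: mongodb
    port: 27017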

k get pods

NAME        READY   STATUS             RESTARTS        AGE
mongodb-0   0/1     CrashLoopBackOff   3 (16s ago)     2m8s
mongodb-1   0/1     CrashLoopBackOff   7 (2m15s ago)   25m
mongodb-2   0/1     CrashLoopBackOff   7 (2m16s ago)   25m
nginx       1/1     Running            0               28h

k describe pod mongodb-0

Name:             mongodb-0
Namespace:        default
Priority:         0
Service Account:  default
Node:             minikube/192.168.59.103
Start Time:       Sat, 17 Feb 2024 17:57:28 +0530
Labels:           app=mongodb
                  controller-revision-hash=mongodb-5dc867c4c7
                  statefulset.kubernetes.io/pod-name=mongodb-0
Annotations:      <none>
Status:           Running
IP:               10.244.0.76
IPs:
  IP:           10.244.0.76
Controlled By:  StatefulSet/mongodb
Containers:
  mongodb:
    Container ID:  docker://168537febead9f2c220e413174a7ab0710b4f1f5268a0ff23c74b19a5b98d3f3
    Image:         mongo:4.4.18
    Image ID:      docker-pullable://mongo@sha256:d23ec07162ca06646a6329c452643f37494af644d045c002a7b41873981c160d
    Port:          27017/TCP
    Host Port:     0/TCP
    Args:
      --config
      /etc/mongod.conf
    State:          Running
      Started:      Sat, 17 Feb 2024 18:01:51 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 17 Feb 2024 18:00:06 +0530
      Finished:     Sat, 17 Feb 2024 18:00:25 +0530
    Ready:          True
    Restart Count:  5
    Environment:
      MONGO_INITDB_ROOT_USERNAME:  <set to the key 'MONGO_INITDB_ROOT_USERNAME' in secret 'mongoadmin'>  Optional: false
      MONGO_INITDB_ROOT_PASSWORD:  <set to the key 'MONGO_INITDB_ROOT_PASSWORD' in secret 'mongoadmin'>  Optional: false
    Mounts:
      /data from mongodb-data (rw)
      /docker-entrypoint-initdb.d from init-scripts (rw)
      /etc/mongod.conf from config (rw,path="mongod.conf")
      /keys from keys (rw)
      /tmp/init.js from repinit (rw,path="init.js")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lpq4q (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  mongodb-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodb-data-mongodb-0
    ReadOnly:   false
  init-scripts:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dbfiles
    Optional:  false
  keys:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  dbkey
    Optional:    false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dbconfig
    Optional:  false
  repinit:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      repinit
    Optional:  false
  kube-api-access-lpq4q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m32s                 default-scheduler  Successfully assigned default/mongodb-0 to minikube
  Normal   Pulled     119s (x5 over 4m27s)  kubelet            Container image "mongo:4.4.18" already present on machine
  Normal   Created    117s (x5 over 4m27s)  kubelet            Created container mongodb
  Normal   Started    114s (x5 over 4m26s)  kubelet            Started container mongodb
  Warning  BackOff    50s (x10 over 3m48s)  kubelet            Back-off restarting failed container mongodb in pod mongodb-0_default(14ff7d3c-e08a-47c4-828b-928cd017bd50)

Additionally, this setup works fine if I remove the --config argument.
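Worth noting for anyone debugging the same symptom: because mongod.conf (below) routes the server log to a file, kubectl logs shows very little. Commands along these lines can surface the real error (the second assumes the pod stays up long enough to exec into):

kubectl logs mongodb-0 --previous
kubectl exec mongodb-0 -- tail -n 50 /var/log/mongodb/mongod.log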

Following is the init script for reference. I only want it to run on the first pod of the StatefulSet (a guard sketch follows the script).

const user = process.env["MONGO_INITDB_ROOT_USERNAME"];
const pass = process.env["MONGO_INITDB_ROOT_PASSWORD"];
db = connect('mongodb://' + user + ':' + pass + '@localhost/admin');

// Create any users from users.json that do not exist yet.
//var file = cat('/docker-entrypoint-initdb.d/users.json');
var file = fs.readFileSync('/docker-entrypoint-initdb.d/users.json', 'utf8');
var myusers = JSON.parse(file);
for (let i = 0; i < myusers.length; i++) {
        if (db.getUser(myusers[i].user) == null) {
                db.createUser(myusers[i]);
        }
}

// Seed testdb1 and testdb2 only if they are empty.
db = db.getSiblingDB("testdb1");
if (db.test.countDocuments({}) == 0) {
        var datastr = fs.readFileSync('/docker-entrypoint-initdb.d/db1.json', 'utf8');
        db.test.insertMany(JSON.parse(datastr));
}
db = db.getSiblingDB("testdb2");
if (db.test.countDocuments({}) == 0) {
        var datastr = fs.readFileSync('/docker-entrypoint-initdb.d/db2.json', 'utf8');
        db.test.insertMany(JSON.parse(datastr));
}
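For the "first pod only" requirement, a guard along these lines at the top of the script should work (a sketch; it assumes the container's HOSTNAME env var is the pod name, which Kubernetes sets by default):

// Only the StatefulSet pod with ordinal 0 (e.g. "mongodb-0") runs the init.
if (!process.env["HOSTNAME"] || !process.env["HOSTNAME"].endsWith("-0")) {
        print("Not the first pod; skipping initialization");
        quit(0);
}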

mongod.conf

replication:
  replSetName: "rs0"

security:
  keyFile: /keys/key

storage:
  dbPath: /data/db

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

net:
  port: 27017
  bindIp: 0.0.0.0

processManagement:
  timeZoneInfo: /usr/share/zoneinfo
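As an aside, my understanding is that mongod logs to stdout when no systemLog block is set, so a variant like the following should make kubectl logs show the server output directly (same config minus the file logging):

replication:
  replSetName: "rs0"
security:
  keyFile: /keys/key
storage:
  dbPath: /data/db
net:
  port: 27017
  bindIp: 0.0.0.0
processManagement:
  timeZoneInfo: /usr/share/zoneinfo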

Solution

  • I figured out the issue. My keyfile permissions were incorrect. MongoDB requires the keyfile to have mode 400 and to be owned by the mongodb user (uid 999). This turned out to be a challenge, as there seems to be no way to change the owner of a file mounted from a ConfigMap/Secret; a common workaround is sketched below.
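One pattern that works around the ownership limitation (a sketch of the usual approach, not necessarily the exact change made here): an init container copies the key from the read-only Secret mount into a shared emptyDir, then fixes owner and mode so mongod (uid 999) can read it. The keys-rw volume name and the /keys-ro mount path are hypothetical:

      initContainers:
      - name: fix-key-perms
        image: busybox
        # Copy the key out of the Secret mount, then hand it to uid/gid 999 with mode 400.
        command: ["sh", "-c", "cp /keys-ro/key /keys/key && chown 999:999 /keys/key && chmod 400 /keys/key"]
        volumeMounts:
        - name: keys          # the existing Secret volume from the StatefulSet above
          mountPath: /keys-ro
        - name: keys-rw       # hypothetical emptyDir shared with the mongodb container
          mountPath: /keys
      volumes:
      - name: keys-rw
        emptyDir: {}

The mongodb container then mounts keys-rw at /keys in place of the Secret volume, so security.keyFile: /keys/key in mongod.conf keeps working. As far as I know, fsGroup is not a substitute here: it only adjusts group ownership, and mongod rejects keyfiles that are group-accessible.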