I am setting up a MongoDB cluster using a Kubernetes StatefulSet. I have a working configuration, but the mongod daemon currently starts as root. My goal is to run the StatefulSet with the following security context:

runAsUser: 999
runAsGroup: 999
This corresponds to the 'mongodb' user in the official MongoDB image. For the replica set configuration, I want to use an authentication key.
In my current configuration, which runs as root, the authentication key is mounted into a file from a ConfigMap. Once I change the security context, the key becomes unreadable by the MongoDB process: it needs to have permissions set to 400, and the file's owner and group must be 999 (mongodb).
I've tried fsGroup, but it only changes the group ownership of the file; the owner remains root, so it doesn't work. Modifying the file permissions from an init container also fails: I get a message stating that the filesystem is read-only.
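For completeness, this is the kind of init-container setup I was attempting (an untested sketch; the container and volume names `fix-key-perms` and `mongodbkey-writable` are placeholders, not from my manifest below). Copying the key into a writable emptyDir, instead of chmod-ing the ConfigMap mount in place, might avoid the read-only error:

```yaml
# Sketch only: copy the key out of the read-only ConfigMap mount
# into an emptyDir, then fix ownership and permissions there.
initContainers:
  - name: fix-key-perms
    image: busybox:1.36
    # This one container runs as root so chown is permitted,
    # even though the pod-level securityContext is 999/999.
    securityContext:
      runAsUser: 0
    command:
      - sh
      - -c
      - |
        cp /configmap/mongodb.key /writable/mongodb.key
        chown 999:999 /writable/mongodb.key
        chmod 400 /writable/mongodb.key
    volumeMounts:
      - name: mongodbkey           # the existing ConfigMap volume (read-only)
        mountPath: /configmap
      - name: mongodbkey-writable  # a new emptyDir volume
        mountPath: /writable
volumes:
  - name: mongodbkey-writable
    emptyDir: {}
```

The main container would then mount `mongodbkey-writable` at /etc/mongodbkey instead of the ConfigMap volume.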
Perhaps there is a solution or another way to mount a file with a specific owner inside a pod. Alternatively, I might be using the wrong methodology, which is possible since I'm relatively new to Kubernetes.
Thank you in advance for your assistance.
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongod
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  publishNotReadyAddresses: true
  clusterIP: None
  selector:
    role: mongod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 1
  selector:
    matchLabels:
      role: mongod
  template:
    metadata:
      labels:
        role: mongod
        replicaset: test
    spec:
      securityContext:
        runAsUser: 999
        runAsGroup: 999
        fsGroup: 999
      terminationGracePeriodSeconds: 15
      volumes:
        - name: mongodbkey
          configMap:
            name: mongodb-key
            defaultMode: 0400
        - name: mongodb-init
          configMap:
            name: mongodb-init
            defaultMode: 0777
      containers:
        - name: mongod
          image: mongo:7.0.2
          command:
            - /bin/sh
            - -c
            - |
              /data/mongodbinit/mongo-user.sh &
              mongod --replSet=test --bind_ip_all --auth --dbpath=/data/db --keyFile=/etc/mongodbkey/mongodb.key --setParameter=authenticationMechanisms=SCRAM-SHA-1;
          envFrom:
            - secretRef:
                name: mongo-creds
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/bash
                  - -c
                  - |
                    if [ -f /data/db/admin-user.lock ]; then
                      if [ "$HOSTNAME" != "mongod-0" ]; then
                        mongosh mongod-0.mongodb-service.default.svc.cluster.local -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin --eval 'rs.remove("'$HOSTNAME'.mongodb-service.default.svc.cluster.local:27017")';
                        rm -f /data/db/admin-user.lock;
                      fi;
                    fi;
          livenessProbe:
            exec:
              command:
                - bash
                - -c
                - |
                  if [[ "$(mongosh localhost:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --authenticationDatabase=admin --eval 'db.adminCommand({ping:1}).ok' --quiet)" == "1" ]]; then
                    exit 0;
                  else
                    exit 1;
                  fi
            initialDelaySeconds: 90
            periodSeconds: 60
            failureThreshold: 3
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - bash
                - -c
                - |
                  if [[ "$(mongosh localhost:27017 --authenticationDatabase=admin --eval 'db.adminCommand({ping:1}).ok' --quiet)" == "1" ]]; then
                    exit 0;
                  else
                    exit 1;
                  fi
            initialDelaySeconds: 5
            successThreshold: 1
            periodSeconds: 30
            timeoutSeconds: 5
          resources:
            requests:
              cpu: 0.2
              memory: 200Mi
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-persistent-storage-claim
              mountPath: /data/db
            - name: mongodbkey
              mountPath: /etc/mongodbkey/mongodb.key
              subPath: mongodb.key
            - name: mongodb-init
              mountPath: /data/mongodbinit
  volumeClaimTemplates:
    - metadata:
        name: mongodb-persistent-storage-claim
        annotations:
          volume.beta.kubernetes.io/storage-class: "standard"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-key
data:
  mongodb.key: |
    aP+dcZjtBO3Qjvwm2oaxjhYsGrezOi6oV49+dbpON8mTiyW0Zr359EUCYcLn/QcU
    3jPmPDnil1mdoIevcz1z/GF17gbmFVBFR9H24DMAeDzxzHhVusVp0NN6dn0mhCcO
    vQUaG9uTxKPAthkwG1JJJQwmzwhc5cXIq51hY5Ea6IoAWmxebvqohRjWf8KjXXE6
    ji0XguppRKEHk4LEZMoeztjAaSKFn/DF9IXuraHtR1RqViDbExQYc9Db/obUS1jN
    80dR7GpvE5IExju3uSSOv1LkjVgWHz/02sdk8qc9p/R6Zua6D/HuCI6OFYJ2btYq
    bIJ4rpphmI06d64XFzWexQeLpOSGBJQt+630U7GzcQHM9ARUYybrSoQ39i3w1Wci
    lMZq96AGh24WVWenIgKons0BOW4LH2RljUHl7XeSC3HC+3DtqHIuUa5I9JiAp0Ch
    8xnAcJuWr0ZQJDHr8iOJCWteOr+CMKy/UTgI96xYsq36YV+Ch/swNBVfHVsZQoIj
    jqGNh3EhZ266wZBUXrL7xfimxCvQlEGcklbew3WyIRBrWsIXm3Wf8KTz4wmYp4Cs
    petISeEAr2Lh2Bv2SBDXPF4RCFEfgDJvszPUHAkMTp2khO22s08ohYfq8ebMx6hC
    U3AnnbGtFqQ3PUdtWstPKoRLWp3cM6hc6rRECIpkJhzkPT9J2cmtwxLwvlKoSDZw
    UzZFptrD9I19xNP5QGacmIJCwFIm0Wwnv1bZUKz5+JopqyAPoTeq1k9J3bq2fRIX
    2ArTDUPqluqzlNI1PBM6+5A58arR+eSNQMfmOKfQVA6LV5kQruJrolFQIwqsnBwy
    Hf/OlHSjYR7gCRpd6Mh+O3hgd1baq+DnQgSGbsc+5qJks9sC4c0i1JIggM09+90g
    EMaS4C5m7YTgnHGUPDMS2iSr+lsxurrhR8h/3vU4JBMuGQysQpTpv1VpYxIFUhyq
    fCTVBz7aLengL4DxgEqwfLoDtIrMeMZuEyblNnH9G3oEBxnvJpY83vfHweVrneS7
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-init
data:
  mongo-user.sh: |
    #!/bin/bash
    if [ ! -f /data/db/admin-user.lock ]; then
      sleep 45;
      if [ "$HOSTNAME" = "mongod-0" ]; then
        mongosh admin --eval 'rs.initiate({_id: "test", members: [{ _id: 0, host: "mongod-0.mongodb-service.default.svc.cluster.local" }]})';
      else
        sleep 40;
        mongosh mongod-0.mongodb-service.default.svc.cluster.local -u ${MONGO_INITDB_ROOT_USERNAME} -p ${MONGO_INITDB_ROOT_PASSWORD} --authenticationDatabase=admin --eval 'rs.add( { host: "'${HOSTNAME}'.mongodb-service.default.svc.cluster.local" } )';
      fi;
      touch /data/db/admin-user.lock;
    fi;
---
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: dXNlcjE=
  MONGO_INITDB_ROOT_PASSWORD: dXNlcjE=
kind: Secret
metadata:
  creationTimestamp: null
  name: mongo-creds
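For reference, the Secret values above are just the base64-encoded credentials, which can be generated with:

```shell
# Encode the username/password for the Secret's data fields.
# printf '%s' avoids a trailing newline sneaking into the encoding.
printf '%s' user1 | base64   # dXNlcjE=
```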
Not an answer to your question, but relevant anyway: you should not do anything manually in the dbPath folder (i.e. /data/db/). What is the reason for checking/deleting/creating /data/db/admin-user.lock? If you want to check whether the replica set has already been initiated, you can use

ret=$( mongosh --eval 'try { rs.status() } catch (e) { print(e.codeName) }' )
[[ "$ret" == "NotYetInitialized" ]] && mongosh admin --eval 'rs.initiate(...)'

And, as already mentioned in "MongoDB 7.02 starts without replicaset while has Readiness Probe in Openshift 3.11", you should add

while (! db.hello().isWritablePrimary) sleep(1000)

after rs.initiate(); otherwise mongosh may terminate before the replica set has been fully initiated.
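Putting both suggestions together, the init script could look roughly like this (an untested sketch; it assumes mongosh is on PATH inside the container, and the rs.initiate arguments are taken from the manifest above):

```shell
#!/bin/bash
# Sketch: replace the admin-user.lock bookkeeping with a replica-set state check.
ret=$( mongosh --quiet --eval 'try { rs.status().ok } catch (e) { print(e.codeName) }' )
if [[ "$ret" == "NotYetInitialized" ]]; then
  mongosh admin --eval '
    rs.initiate({_id: "test", members: [{ _id: 0, host: "mongod-0.mongodb-service.default.svc.cluster.local" }]});
    // block until this node has actually become a writable primary
    while (!db.hello().isWritablePrimary) sleep(1000);
  '
fi
```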