kubernetes cephfs

How to share a cephfs volume between pods in different k8s namespaces


I'm trying to share a cephfs volume between namespaces within a k8s cluster. I'm using ceph-csi with cephfs.

Followed https://github.com/ceph/ceph-csi/blob/devel/docs/static-pvc.md#cephfs-static-pvc to create a static PV+PVC in both namespaces. This works as long as the two pods don't land on the same node.

If both pods run on the same node, the second pod gets stuck with this error:

MountVolume.SetUp failed for volume "team-test-vol-pv" : rpc error: code = Internal desc = failed to bind-mount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/team-test-vol-pv/globalmount to /var/lib/kubelet/pods/007fc605-7fa4-4dc6-890f-fc0dabe5740b/volumes/kubernetes.io~csi/team-test-vol-pv/mount: an error (exit status 32) occurred while running mount args: [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/pv/team-test-vol-pv/globalmount /var/lib/kubelet/pods/007fc605-7fa4-4dc6-890f-fc0dabe5740b/volumes/kubernetes.io~csi/team-test-vol-pv/moun

Any ideas how to resolve this, or how to use a single RWX volume in different namespaces?

PV+PVC for team-x:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol
  namespace: team-x
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: team-x-test-vol-pv

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-x-test-vol-pv
spec:
  claimRef:
    namespace: team-x
    name: test-vol
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret-hd
      namespace: ceph-csi
    volumeAttributes:
      "clusterID": "cd79ae11-1804-4c06-a97e-aeeb961b84b0"
      "fsName": "cephfs"
      "staticVolume": "true"
      "rootPath": /volumes/team/share/8b73d3bb-282e-4c32-b13a-97459419bd5b
    # volumeHandle can be anything, it need not be the same
    # as the PV name or volume name; keeping it the same for brevity
    volumeHandle: team-share
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem

PV+PVC for team-y:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-vol
  namespace: team-y
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeMode: Filesystem
  # volumeName should be same as PV name
  volumeName: team-y-test-vol-pv

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: team-y-test-vol-pv
spec:
  claimRef:
    namespace: team-y
    name: test-vol
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: cephfs.csi.ceph.com
    nodeStageSecretRef:
      name: csi-cephfs-secret-hd
      namespace: ceph-csi
    volumeAttributes:
      "clusterID": "cd79ae11-1804-4c06-a97e-aeeb961b84b0"
      "fsName": "cephfs"
      "staticVolume": "true"
      "rootPath": /volumes/team-y/share/8b73d3bb-282e-4c32-b13a-97459419bd5b
    # volumeHandle can be anything, it need not be the same
    # as the PV name or volume name; keeping it the same for brevity
    volumeHandle: team-share
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem

Solution

  • Making volumeHandle unique for each PV did the trick. Tested by deploying 3 DaemonSets in 3 different namespaces.
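
In the manifests above, both PVs use volumeHandle: team-share; since the CSI layer identifies a volume by its handle, two PVs sharing one handle collide when staged on the same node. A minimal sketch of the changed csi stanzas (the handle values here are illustrative; using the PV name is one easy way to keep them unique):

```yaml
# In team-x-test-vol-pv:
  csi:
    driver: cephfs.csi.ceph.com
    # unique per PV, e.g. reuse the PV name
    volumeHandle: team-x-test-vol-pv

# In team-y-test-vol-pv:
  csi:
    driver: cephfs.csi.ceph.com
    volumeHandle: team-y-test-vol-pv
```

All other fields (nodeStageSecretRef, volumeAttributes, rootPath) stay exactly as in the question.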