Tags: kubernetes, persistent-volumes, statefulset

Volume is already exclusively attached to one node and can't be attached to another


I have a pretty simple Kubernetes pod. I want a StatefulSet with the following process:

  1. I want to have an initContainer download and uncompress a tarball from S3 into a volume mounted to the initContainer
  2. I want to mount that volume to my main container to be used

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
  namespace: test
  labels:
    name: app
spec:
  serviceName: app
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      initContainers:
      - name: preparing
        image: alpine:3.8
        imagePullPolicy: IfNotPresent
        command:
          - "sh"
          - "-c"
          - |
            echo "Downloading data"
            wget https://s3.amazonaws.com/.........
            tar -xvzf xxxx-........ -C /root/
        volumeMounts:
        - name: node-volume
          mountPath: /root/data/

      containers:
      - name: main-container
        image: ecr.us-west-2.amazonaws.com/image/:latest
        imagePullPolicy: Always

        volumeMounts:
        - name: node-volume
          mountPath: /root/data/

  volumeClaimTemplates:
  - metadata:
      name: node-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gp2-b
      resources:
        requests:
          storage: 80Gi

I keep running into the same problem. When I run this, I can see the logs flowing as my tarball is downloaded by the initContainer. About halfway through, the pod terminates and gives me the following error:

Multi-Attach error for volume "pvc-faedc8" Volume is already exclusively attached to one node and can't be attached to another

Solution

  • Looks like you have a dangling PVC and/or PV that is still attached to one of your nodes. You can ssh into the node and run df or mount to check.
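
    A minimal sketch of that check, assuming you can SSH to the node, and using the volume ID from the error message (pvc-faedc8; substitute your own):

    $ ssh <node-name>
    $ df -h | grep pvc-faedc8
    $ mount | grep pvc-faedc8

    You can also check from the API side: VolumeAttachment objects record which node each PV is currently attached to:

    $ kubectl get volumeattachments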

    PVCs in a StatefulSet are always mapped to their pod names, so it may be that you still have a dangling pod; you can check as shown below.
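
    A quick way to check, assuming the test namespace from the manifest above (StatefulSet PVC names combine the volumeClaimTemplate name and the pod name, so here it would be node-volume-app-0):

    $ kubectl -n test get pods
    $ kubectl -n test get pvc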

    If you have a dangling pod:

    $ kubectl -n test delete pod <pod-name>
    

    You may have to force it:

    $ kubectl -n test delete pod <pod-name> --grace-period=0 --force
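
    Note that --grace-period=0 --force removes the pod object from the API server without waiting for the kubelet to confirm, so it's worth verifying the pod is actually gone before moving on:

    $ kubectl -n test get pods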
    

    Then, you can try deleting the PVC and its corresponding PV. Note that the volume named in the error message (pvc-faedc8) is the PV, since dynamically provisioned PVs are named pvc-<uid>; the PVC itself lives in your namespace:

    $ kubectl -n test delete pvc <pvc-name>
    $ kubectl delete pv pvc-faedc8
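
    Once the dangling pod, PVC, and PV are gone, the StatefulSet controller will recreate the pod and provision a fresh PVC from the volumeClaimTemplate. As a final sanity check, assuming the names from the manifest above, you can watch the new pod come up and confirm the stale attachment is gone:

    $ kubectl -n test get pods -w
    $ kubectl get volumeattachments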