amazon-web-services, kubernetes, kubernetes-helm, amazon-eks

Kubernetes StatefulSet storage on AWS EBS with Helm


I am deploying a StatefulSet with Helm, and the pods are complaining about volumes.

What is the proper way of doing this with AWS EBS, given the Helm templates below?

Warning  FailedScheduling  30s (x112 over 116m)  default-scheduler  0/9 nodes are available: 9 pod has unbound immediate PersistentVolumeClaims.
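
On EKS this usually means the PersistentVolumeClaims created from the volumeClaimTemplates reference a StorageClass that does not exist in the cluster, so no PersistentVolume can be dynamically provisioned and the pods stay unschedulable. A quick check (generic kubectl usage, not from the original post) is to list the classes the cluster actually has:

kubectl get storageclass

On a stock EKS cluster this typically shows a single class named gp2, annotated as the default.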

deployment.yaml

volumeClaimTemplates:
  - metadata:
      name: {{ .Values.storage.name }}
      labels:
        app: {{ template "etcd.name" . }}
        chart: {{ .Chart.Name }}-{{ .Chart.Version }}
        release: {{ .Release.Name }}
        heritage: {{ .Release.Service }}
    spec:
      storageClassName: {{ .Values.storage.class | default .Values.global.storage.class }}
      accessModes:
        - {{ .Values.storage.accessMode }}
      resources:
        requests:
          storage: {{ .Values.storage.size }}

values.yaml

storage:
  name: etcd-data
  mountPath: /somepath/etcd
  class: "default"
  size: 1Gi
  accessMode: ReadWriteOnce
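
For context, the claim produced by the template above is consumed in the pod spec by name; the chart presumably mounts it along these lines (the container block below is an assumed sketch, not part of the original chart):

containers:
  - name: etcd
    image: {{ .Values.image }}  # assumed values key, not shown in the original chart
    volumeMounts:
      - name: {{ .Values.storage.name }}          # must match the volumeClaimTemplates metadata.name
        mountPath: {{ .Values.storage.mountPath }}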

Solution

  • Try changing the storage class name to the default class on EKS, gp2:

    # deployment.yaml
    ...
    spec:
      storageClassName: {{ .Values.storage.class | default "gp2" | quote }}
      accessModes:
      - ...

    # values.yaml
    storage:
      ...
      class: "gp2"
      ...
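
  • After upgrading the release, the claims should bind and the pods should schedule. A quick way to verify, using the release label the template already sets on the claims (generic kubectl usage, not from the original answer):

    kubectl get pvc -l release=<release-name>

  • Alternatively, if you want a class literally named "default", you can create one yourself; a minimal sketch using the in-tree EBS provisioner (the parameters are assumptions, adjust for your cluster):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: default
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2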