
K8s - can't upgrade the StatefulSet API before the K8s upgrade


I am upgrading K8s from 1.15 to 1.16. Before I do that, I have to migrate my StatefulSet YAML to the apps/v1 API version, but K8s doesn't let me do it.

The old version of the YAML is here (the variables are stored in another file):

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: {{ .NAME_KAFKA }}
  namespace: {{ .NS }}
spec:
  serviceName: {{ .NAME_KAFKA  }}-service
  replicas: {{ .CLUSTER_SIZE_KAFKA }}
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: {{ .NAME_KAFKA }}
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9009"
    spec:
      priorityClassName: {{ .PRIORITY_HIGHEST }}
      nodeSelector:
        lifecycle: OnDemand
      terminationGracePeriodSeconds: 301
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - {{ .NAME_KAFKA }}
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: local-{{ .NAME_KAFKA }}
        imagePullPolicy: Always
        image: {{ .REPO }}/{{ .IMAGE_KAFKA }}:{{ .VERSION_KAFKA }}
        resources:
          requests:
            memory: 768Mi
            cpu: 500m
          limits:
            memory: 768Mi
            cpu: 500m
        ports:
        - containerPort: 9092
          name: server
        - containerPort: 9009
          name: prometheus
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        readinessProbe:
          tcpSocket:
            port: 9092
          initialDelaySeconds: 60
          timeoutSeconds: 5
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 20Gi

I change the apiVersion from apps/v1beta1 to apps/v1 in the YAML file and try to apply it. Predictably, I receive this error:

error: error validating "STDIN": error validating data: ValidationError(StatefulSet.spec): missing required field "selector" in io.k8s.api.apps.v1.StatefulSetSpec; if you choose to ignore these errors, turn validation off with --validate=false

So I add the StatefulSet.spec.selector field:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ .NAME_KAFKA }}
  namespace: {{ .NS }}
spec:
  selector:
    matchExpressions:
      - key: "app"
        operator: In
        values:
          - {{ .NAME_KAFKA }}
  serviceName: {{ .NAME_KAFKA  }}-service
  replicas: {{ .CLUSTER_SIZE_KAFKA }}
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: {{ .NAME_KAFKA }}
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9009"
    spec:
      priorityClassName: {{ .PRIORITY_HIGHEST }}
      nodeSelector:
        lifecycle: OnDemand
      terminationGracePeriodSeconds: 301
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - {{ .NAME_KAFKA }}
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: local-{{ .NAME_KAFKA }}
        imagePullPolicy: Always
        image: {{ .REPO }}/{{ .IMAGE_KAFKA }}:{{ .VERSION_KAFKA }}
        resources:
          requests:
            memory: 768Mi
            cpu: 500m
          limits:
            memory: 768Mi
            cpu: 500m
        ports:
        - containerPort: 9092
          name: server
        - containerPort: 9009
          name: prometheus
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        readinessProbe:
          tcpSocket:
            port: 9092
          initialDelaySeconds: 60
          timeoutSeconds: 5
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: gp2
      resources:
        requests:
          storage: 20Gi

But when I try to apply it, I receive this error:

The StatefulSet "kafka-name" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

According to the K8s documentation there should be a way to update the API version in this YAML without recreating all the StatefulSets in K8s. But how can I do that?


Solution

  • Once you have upgraded to 1.16, you can convert your resources.

    As described in the blog post on the 1.16 API deprecations, you can use kubectl convert; that is what the migration guide recommends. See kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16

    kubectl convert -f ./my-statefulset.yaml --output-version apps/v1
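
    By default kubectl convert prints the converted object to stdout, so a minimal sketch is to save it to a new file and review it before applying (the output file name is just a placeholder; note also that kubectl convert was deprecated and later removed from kubectl itself, so it may not be available on newer clients):

    kubectl convert -f ./my-statefulset.yaml --output-version apps/v1 > ./my-statefulset-v1.yaml

    In particular, check that the generated spec.selector matches the pod template labels (app: {{ .NAME_KAFKA }} here). spec.selector is immutable, which is most likely why the hand-written matchExpressions selector above was rejected with the "Forbidden" error: it differs structurally from the selector that was defaulted on the existing object.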
    

    Another option is to delete the StatefulSet with --cascade=false.

    This deletes the StatefulSet object but keeps its pods up and running. Afterwards you should be able to apply the new StatefulSet file again.

    kubectl delete statefulset/<your-statefulset> --cascade=false
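
    A short follow-up sketch after the orphan delete above (namespace, label value, and file name are placeholders for your own values):

    # the pods should still be running; only the StatefulSet object is gone
    kubectl get pods -n <your-namespace> -l app=<your-app-label>

    # re-create the StatefulSet from the apps/v1 manifest; it adopts the running pods through the selector
    kubectl apply -f ./my-statefulset.yaml -n <your-namespace>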