yugabytedb

cgroup issue when starting YugabyteDB in AKS cluster


I successfully completed a trial installation of YugabyteDB on an AKS cluster using the yugabyte-k8s-operator, following the guidance at https://github.com/yugabyte/yugabyte-k8s-operator/blob/main/README.md.

That cluster used D4as_v5 (4 vCPU / 16 GiB) nodes on Kubernetes 1.30.10. I then tried the same installation on a 3-node AKS cluster with D16as_v5 (16 vCPU / 64 GiB) nodes, and it is failing.

Three containers (yb-cleanup, yugabyted-ui, yb-controller) in the yb-tserver pod started successfully, but the yb-tserver container did not.

Similarly, two containers (yb-cleanup, yugabyted-ui) in the yb-master pod started successfully, but the yb-master container did not. The error in both cases is:

message: >-
  Error: failed to create containerd task: failed to create shim task: OCI
  runtime create failed: runc create failed: unable to start container process:
  error during container init: error setting cgroup config for procHooks
  process: openat2
  /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a7315c9_787f_4ddd_12c1_9ecb3457be4b.slice/cri-containerd-yb-tserver.scope/cpu.weight:
  no such file or directory: unknown
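For reference, cpu.weight is a cgroup v2 file; cgroup v1 exposes cpu.shares instead. A quick way to confirm which cgroup version a node is actually running (a minimal sketch, assuming kubectl debug access to the node; the node name is from my cluster):

kubectl debug node/aks-agentpool-24911879-vmss000000 -it --image=ubuntu -- \
  stat -fc %T /host/sys/fs/cgroup/
# prints "cgroup2fs" on cgroup v2 nodes, "tmpfs" on cgroup v1 nodes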

My research seems to suggest that this is the result of an incompatibility with cgroup v2 (in the operator and/or yb-tserver itself).

WHAT CHANGED?

1- The default Kubernetes version on AKS is now 1.31.7, not the 1.30.10 that succeeded for me earlier.
2- I suspect the version of the underlying Ubuntu also changed recently (currently 22.04.5 LTS).

RESOLUTION ATTEMPTS

1- I tried a fresh install but deliberately chose Kubernetes version 1.30.0. The same problem persisted, suggesting that the Kubernetes version is not the only issue.

2- I tried the Azure-recommended fix for cgroup v2 issues, which is to apply a DaemonSet that reverts the nodes to cgroup v1 (kubectl apply -f https://raw.githubusercontent.com/Azure/AKS/master/examples/cgroups/revert-cgroup-v1.yaml). This did not resolve the problem but produced another one:

message(yb-tserver/master): >-
  Error: failed to create containerd task: failed to create shim task: OCI
  runtime create failed: runc create failed: unable to start container process:
  error during container init: error setting cgroup config for procHooks
  process: unable to set memory limit to 8 (current usage: 7831552, peak usage:
  8044544): unknown
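The changed error message seems to suggest the revert itself took effect. To verify that before retrying (the DaemonSet name and namespace depend on the manifest, so the grep below is a guess), the cgroup check from above can be re-run:

kubectl get daemonset -A | grep -i cgroup
kubectl debug node/aks-agentpool-24911879-vmss000000 -it --image=ubuntu -- \
  stat -fc %T /host/sys/fs/cgroup/
# should now report "tmpfs" if the node is back on cgroup v1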

FIX REQUEST

1- Short term: Is there a quick fix for this within yugabyte-k8s-operator?
2- Long term: Can yugabyte-k8s-operator be updated to support both cgroup v1 and v2?
3- What are my options otherwise?

Logs below:

1) LOGS: I could not get yb-tserver logs despite trying several approaches:

mycluster [ ~/yugabyte ]$ kubectl logs -n yugabyte-operator -l app=yb-tserver -c yb-tserver
No resources found in yugabyte-operator namespace.
mycluster [ ~/yugabyte ]$
mycluster [ ~/yugabyte ]$ kubectl logs -n yugabyte-operator -l app.kubernetes.io/name=yb-tserver -c yb-tserver
mycluster [ ~/yugabyte ]$
mycluster [ ~/yugabyte ]$ kubectl logs -n yugabyte-operator -l app.kubernetes.io/name=yb-tserver 
Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup, yugabyted-ui, yb-controller
Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup, yugabyted-ui, yb-controller
Defaulted container "yb-tserver" out of: yb-tserver, yb-cleanup, yugabyted-ui, yb-controller
mycluster [ ~/yugabyte ]$
mycluster [ ~/yugabyte ]$ kubectl logs ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0 \
  -n yugabyte-operator \
  -c yb-tserver
mycluster [ ~/yugabyte ]$
mycluster [ ~/yugabyte ]$ kubectl get pod ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0   -n yugabyte-operator   -o jsonpath='{.spec.containers[*].name}'
yb-tserver yb-cleanup yugabyted-ui yb-controller
mycluster [ ~/yugabyte ]$
mycluster [ ~/yugabyte ]$ kubectl get pod ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0 -n yugabyte-operator -o wide
NAME                                            READY   STATUS             RESTARTS      AGE     IP            NODE                                NOMINATED NODE   READINESS GATES
ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0   3/4     CrashLoopBackOff   4 (28s ago)   2m11s   10.213.1.30   aks-agentpool-24911879-vmss000000   <none>           <none>
mycluster [ ~/yugabyte ]$  
mycluster [ ~/yugabyte ]$ kubectl logs ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0 \
  -n yugabyte-operator \
  -c yb-tserver \
  --previous
mycluster [ ~/yugabyte ]$  
mycluster [ ~/yugabyte ]$ kubectl logs -n yugabyte-operator yugabyte-operator-yugabyte-k8s-operator-0 -c operator
error: container operator is not valid for pod yugabyte-operator-yugabyte-k8s-operator-0
mycluster [ ~/yugabyte ]$ 
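In hindsight, the empty output makes sense: yb-tserver fails inside runc create, so the container never runs and never produces logs; the error lives only in the pod status. One way to pull it directly (pod name as above):

kubectl get pod ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0 -n yugabyte-operator \
  -o jsonpath='{.status.containerStatuses[?(@.name=="yb-tserver")].lastState.terminated.message}'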

2) MAYBE THE POD DESCRIPTION WILL OFFER SOME INSIGHTS:

mycluster [ ~/yugabyte ]$ kubectl describe pod ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0 -n yugabyte-operator
Name:             ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0
Namespace:        yugabyte-operator
Priority:         0
Service Account:  default
Node:             aks-agentpool-24911879-vmss000000/10.224.0.4
Start Time:       Wed, 30 Apr 2025 10:10:49 +0000
Labels:           app.kubernetes.io/name=yb-tserver
                  app.kubernetes.io/part-of=yugabyte-cluster-63523990
                  apps.kubernetes.io/pod-index=0
                  chart=yugabyte
                  component=yugabytedb
                  controller-revision-hash=ybyugabyte-cl-edencentral-1-cdle-yb-tserver-6447c94555
                  heritage=Helm
                  release=ybyugabyte-cl-edencentral-1-cdle
                  statefulset.kubernetes.io/pod-name=ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0
                  yugabyte.io/universe-name=yugabyte-cluster-63523990
                  yugabyte.io/zone=swedencentral-1
                  yugabytedUi=true
Annotations:      checksum/gflags: 9bae712b19fcc90395b39a664868f14f6306dd1bea1bd70ba38c13d86254e980
                  checksum/rootCA: 6b33f90c3d2f14d8c0e047ad1a6b713153aa883964779c9bc3689c85b99c78d6
Status:           Running
IP:               10.213.1.30
IPs:
  IP:           10.213.1.30
Controlled By:  StatefulSet/ybyugabyte-cl-edencentral-1-cdle-yb-tserver
Containers:
  yb-tserver:
    Container ID:  containerd://5702c499512af981e2e5559efe811d63b7610c532d3171180253f2c938649b9e
    Image:         yugabytedb/yugabyte:2024.2.0.0-b145
    Image ID:      docker.io/yugabytedb/yugabyte@sha256:b64215cfa7a2f6699190421e18aa22e0316c803fa6f3f5df6c6cac98b59f6813
    Ports:         9000/TCP, 12000/TCP, 11000/TCP, 13000/TCP, 9100/TCP, 6379/TCP, 9042/TCP, 5433/TCP, 15433/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Command:
      /sbin/tini
      --
    Args:
      /bin/bash
      -c
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        /home/yugabyte/tools/k8s_preflight.py all
      fi && \
      echo "disk check at: $(date)" \
        | tee "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" \
        && sync "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="${HOSTNAME}.ybyugabyte-cl-edencentral-1-cdle-yb-tservers.${NAMESPACE}.svc.cluster.local" \
          --port="9100"
      fi && \
      
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="${HOSTNAME}.ybyugabyte-cl-edencentral-1-cdle-yb-tservers.${NAMESPACE}.svc.cluster.local:9100" \
          --port="9100"
      fi && \
      
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="0.0.0.0" \
          --port="9000"
      fi && \
      
      if [[ -f /home/yugabyte/tools/k8s_parent.py ]]; then
        k8s_parent="/home/yugabyte/tools/k8s_parent.py"
      else
        k8s_parent=""
      fi && \
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="${HOSTNAME}.ybyugabyte-cl-edencentral-1-cdle-yb-tservers.${NAMESPACE}.svc.cluster.local" \
          --port="9042"
      fi && \
      
      if [ -f /home/yugabyte/tools/k8s_preflight.py ]; then
        PYTHONUNBUFFERED="true" /home/yugabyte/tools/k8s_preflight.py \
          dnscheck \
          --addr="0.0.0.0:5433" \
          --port="5433"
      fi && \
      
        mkdir -p /tmp/yugabyte/tserver/conf && \
        envsubst < /opt/tserver/conf/server.conf.template > /tmp/yugabyte/tserver/conf/server.conf && \
        exec ${k8s_parent} /home/yugabyte/bin/yb-tserver \
          --flagfile /tmp/yugabyte/tserver/conf/server.conf
      
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: openat2 /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6d46a43_fa1e_4e19_982f_190fd8893aeb.slice/cri-containerd-5702c499512af981e2e5559efe811d63b7610c532d3171180253f2c938649b9e.scope/cpu.weight: no such file or directory: unknown
      Exit Code:    128
      Started:      Thu, 01 Jan 1970 00:00:00 +0000
      Finished:     Wed, 30 Apr 2025 10:14:02 +0000
    Ready:          False
    Restart Count:  5
    Limits:
      cpu:     3
      memory:  8
    Requests:
      cpu:     2
      memory:  6
    Liveness:  exec [bash -v -c echo "disk check at: $(date)" \
  | tee "/mnt/disk0/disk.check" "/mnt/disk1/disk.check" \
  && sync "/mnt/disk0/disk.check" "/mnt/disk1/disk.check";
exit_code="$?";
echo "disk check exited with: ${exit_code}";
exit "${exit_code}"
] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_IP:                  (v1:status.podIP)
      HOSTNAME:               ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0 (v1:metadata.name)
      NAMESPACE:              yugabyte-operator (v1:metadata.namespace)
      YBDEVOPS_CORECOPY_DIR:  /mnt/disk0/cores
      SSL_CERTFILE:           /root/.yugabytedb/root.crt
    Mounts:
      /mnt/disk0 from ybyugabyte-cl-edencentral-1-cdle-datadir0 (rw)
      /mnt/disk1 from ybyugabyte-cl-edencentral-1-cdle-datadir1 (rw)
      /opt/certs/yugabyte from ybyugabyte-cl-edencentral-1-cdle-yb-tserver-tls-cert (ro)
      /opt/debug_hooks_config from debug-hooks-volume (rw)
      /opt/tserver/conf from tserver-gflags (rw)
      /root/.yugabytedb/ from ybyugabyte-cl-edencentral-1-cdle-client-tls (ro)
      /tmp from tserver-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86phx (ro)
  yb-cleanup:
    Container ID:  containerd://9d1396382c7206fb15cd19142b45e7db4c7a7862dd180c23cb58d683f5872d7b
    Image:         yugabytedb/yugabyte:2024.2.0.0-b145
    Image ID:      docker.io/yugabytedb/yugabyte@sha256:b64215cfa7a2f6699190421e18aa22e0316c803fa6f3f5df6c6cac98b59f6813
    Port:          <none>
    Host Port:     <none>
    Command:
      /sbin/tini
      --
    Args:
      /bin/bash
      -c
      while true; do
        sleep 3600;
        /home/yugabyte/scripts/log_cleanup.sh;
      done
      
    State:          Running
      Started:      Wed, 30 Apr 2025 10:10:53 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      USER:  yugabyte
    Mounts:
      /home/yugabyte/ from ybyugabyte-cl-edencentral-1-cdle-datadir0 (rw,path="yb-data")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86phx (ro)
      /var/yugabyte/cores from ybyugabyte-cl-edencentral-1-cdle-datadir0 (rw,path="cores")
  yugabyted-ui:
    Container ID:  containerd://e6e7815ce668918fee7f425741f686ada0959662505da0e2a9505c407767d4f0
    Image:         yugabytedb/yugabyte:2024.2.0.0-b145
    Image ID:      docker.io/yugabytedb/yugabyte@sha256:b64215cfa7a2f6699190421e18aa22e0316c803fa6f3f5df6c6cac98b59f6813
    Port:          <none>
    Host Port:     <none>
    Command:
      /sbin/tini
      --
    Args:
      /bin/bash
      -c
      while true; do
      /home/yugabyte/bin/yugabyted-ui \
        -database_host=${HOSTNAME}.ybyugabyte-cl-edencentral-1-cdle-yb-tservers.${NAMESPACE}.svc.cluster.local \
        -bind_address=0.0.0.0 \
        -ysql_port=5433 \
        -ycql_port=9042 \
        -master_ui_port=7000 \
        -tserver_ui_port=9000 \
        -secure=true \
      || echo "ERROR: yugabyted-ui failed. This might be because your yugabyte \
      version is older than 2.21.0. If this is the case, set yugabytedUi.enabled to false \
      in helm to disable yugabyted-ui, or upgrade to a version 2.21.0 or newer."; \
      echo "Attempting restart in 30s."
      trap break TERM INT; \
      sleep 30s & wait; \
      trap - TERM INT;
      done \
      
    State:          Running
      Started:      Wed, 30 Apr 2025 10:10:53 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      HOSTNAME:   ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0 (v1:metadata.name)
      NAMESPACE:  yugabyte-operator (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86phx (ro)
  yb-controller:
    Container ID:  containerd://583ad1e1a88a9f4099bdcf11875c4938ed148c4b6de640b0a08f11a1a6c2f497
    Image:         yugabytedb/yugabyte:2024.2.0.0-b145
    Image ID:      docker.io/yugabytedb/yugabyte@sha256:b64215cfa7a2f6699190421e18aa22e0316c803fa6f3f5df6c6cac98b59f6813
    Port:          18018/TCP
    Host Port:     0/TCP
    Command:
      /sbin/tini
      --
    Args:
      /bin/bash
      -c
      while true; do
        sleep 60;
        /home/yugabyte/tools/k8s_ybc_parent.py status || /home/yugabyte/tools/k8s_ybc_parent.py start;
      done
      
    State:          Running
      Started:      Wed, 30 Apr 2025 10:10:53 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/disk0 from ybyugabyte-cl-edencentral-1-cdle-datadir0 (rw)
      /mnt/disk1 from ybyugabyte-cl-edencentral-1-cdle-datadir1 (rw)
      /opt/certs/yugabyte from ybyugabyte-cl-edencentral-1-cdle-yb-tserver-tls-cert (ro)
      /tmp from tserver-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-86phx (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  ybyugabyte-cl-edencentral-1-cdle-datadir0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ybyugabyte-cl-edencentral-1-cdle-datadir0-ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0
    ReadOnly:   false
  ybyugabyte-cl-edencentral-1-cdle-datadir1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ybyugabyte-cl-edencentral-1-cdle-datadir1-ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0
    ReadOnly:   false
  debug-hooks-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      ybyugabyte-cl-edencentral-1-cdle-tserver-hooks
    Optional:  false
  tserver-gflags:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ybyugabyte-cl-edencentral-1-cdle-tserver-gflags
    Optional:    false
  tserver-tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  ybyugabyte-cl-edencentral-1-cdle-yb-tserver-tls-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ybyugabyte-cl-edencentral-1-cdle-yb-tserver-tls-cert
    Optional:    false
  ybyugabyte-cl-edencentral-1-cdle-client-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ybyugabyte-cl-edencentral-1-cdle-client-tls
    Optional:    false
  kube-api-access-86phx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  3m28s                  default-scheduler  Successfully assigned yugabyte-operator/ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0 to aks-agentpool-24911879-vmss000000
  Normal   Started    3m24s                  kubelet            Started container yugabyted-ui
  Normal   Pulled     3m24s                  kubelet            Container image "yugabytedb/yugabyte:2024.2.0.0-b145" already present on machine
  Normal   Created    3m24s                  kubelet            Created container: yb-cleanup
  Normal   Started    3m24s                  kubelet            Started container yb-cleanup
  Normal   Pulled     3m24s                  kubelet            Container image "yugabytedb/yugabyte:2024.2.0.0-b145" already present on machine
  Normal   Created    3m24s                  kubelet            Created container: yugabyted-ui
  Normal   Pulled     3m24s                  kubelet            Container image "yugabytedb/yugabyte:2024.2.0.0-b145" already present on machine
  Normal   Created    3m24s                  kubelet            Created container: yb-controller
  Normal   Started    3m24s                  kubelet            Started container yb-controller
  Normal   Created    3m (x3 over 3m24s)     kubelet            Created container: yb-tserver
  Warning  Failed     3m (x3 over 3m24s)     kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error setting cgroup config for procHooks process: openat2 /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6d46a43_fa1e_4e19_982f_190fd8893aeb.slice/cri-containerd-yb-tserver.scope/cpu.weight: no such file or directory: unknown
  Warning  BackOff    2m43s (x6 over 3m22s)  kubelet            Back-off restarting failed container yb-tserver in pod ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0_yugabyte-operator(a6d46a43-fa1e-4e19-982f-190fd8893aeb)
  Normal   Pulled     2m28s (x4 over 3m25s)  kubelet            Container image "yugabytedb/yugabyte:2024.2.0.0-b145" already present on machine
mycluster [ ~/yugabyte ]$ 


3) THIS IS THE CRD: I used bare integers for the resource specs; my prior installation attempts failed when the values were not integers.

cat > yugabyte-cluster.yaml << EOF
apiVersion: operator.yugabyte.io/v1alpha1
kind: YBUniverse
metadata:
  name: yugabyte-cluster
  namespace: yugabyte-operator
  labels:
    app: yugabyte
spec:
  numNodes: 3
  replicationFactor: 3
  enableYSQL: true
  enableNodeToNodeEncrypt: true
  enableClientToNodeEncrypt: true
  enableLoadBalancer: false
  ybSoftwareVersion: "2024.2.0.0-b145"
  enableYSQLAuth: true
  enableYCQL: true
  enableYCQLAuth: true
  gFlags:
    tserverGFlags:
      redis_proxy_bind_address: "0.0.0.0:6379"
      start_redis_proxy: "true"
    masterGFlags: {}
  deviceInfo:
    volumeSize: 128
    numVolumes: 2
    storageClass: "managed-csi-premium-retain"
  kubernetesOverrides:
    resource:
      master:
        requests:
          cpu: 1
          memory: 2
        limits:
          cpu: 2
          memory: 3
      tserver:
        requests:
          cpu: 2
          memory: 6
        limits:
          cpu: 3
          memory: 8
EOF
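To see the resource values the operator actually renders into the pod spec, the generated StatefulSet (named in the pod description above) can be read back:

kubectl get statefulset ybyugabyte-cl-edencentral-1-cdle-yb-tserver -n yugabyte-operator \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="yb-tserver")].resources}'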

4) MY INSTALLATION STEPS:

mycluster [ ~/yugabyte ]$ kubectl apply -f yugabyte-cluster.yaml -n yugabyte-operator
ybuniverse.operator.yugabyte.io/yugabyte-cluster created
mycluster [ ~/yugabyte ]$ kubectl get pod --namespace yugabyte-operator
kubectl get pvc --namespace yugabyte-operator
kubectl get pv --namespace yugabyte-operator
NAME                                            READY   STATUS              RESTARTS   AGE
ybyugabyte-cl-edencentral-1-cdle-yb-master-0    0/3     ContainerCreating   0          9s
ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0   0/4     ContainerCreating   0          9s
ybyugabyte-cl-edencentral-2-ddle-yb-master-0    0/3     ContainerCreating   0          9s
ybyugabyte-cl-edencentral-2-ddle-yb-tserver-0   0/4     ContainerCreating   0          9s
ybyugabyte-cl-edencentral-3-edle-yb-master-0    0/3     ContainerCreating   0          9s
ybyugabyte-cl-edencentral-3-edle-yb-tserver-0   0/4     ContainerCreating   0          9s
yugabyte-operator-yugabyte-k8s-operator-0       2/2     Running             0          31m
NAME                                                                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                 VOLUMEATTRIBUTESCLASS   AGE
ybyugabyte-cl-edencentral-1-cdle-datadir0-ybyugabyte-cl-edencentral-1-cdle-yb-master-0    Bound    pvc-041d5a45-4eb7-457e-b37d-efcfe33d2dfa   50Gi       RWO            managed-csi-premium-retain   <unset>                 10s
ybyugabyte-cl-edencentral-1-cdle-datadir0-ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0   Bound    pvc-d047e549-014c-4cc8-88d5-0358e44a34b1   128Gi      RWO            managed-csi-premium-retain   <unset>                 9s
ybyugabyte-cl-edencentral-1-cdle-datadir1-ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0   Bound    pvc-90416902-24c2-43ff-9b95-b4129c9facff   128Gi      RWO            managed-csi-premium-retain   <unset>                 10s
ybyugabyte-cl-edencentral-2-ddle-datadir0-ybyugabyte-cl-edencentral-2-ddle-yb-master-0    Bound    pvc-f2eafcec-2666-4e09-a751-2378b88c5f61   50Gi       RWO            managed-csi-premium-retain   <unset>                 9s
ybyugabyte-cl-edencentral-2-ddle-datadir0-ybyugabyte-cl-edencentral-2-ddle-yb-tserver-0   Bound    pvc-1401bb26-d6f2-4390-bb2e-b4fe7818ded4   128Gi      RWO            managed-csi-premium-retain   <unset>                 9s
ybyugabyte-cl-edencentral-2-ddle-datadir1-ybyugabyte-cl-edencentral-2-ddle-yb-tserver-0   Bound    pvc-bb33784d-cb46-4b74-98eb-c7b54587c786   128Gi      RWO            managed-csi-premium-retain   <unset>                 9s
ybyugabyte-cl-edencentral-3-edle-datadir0-ybyugabyte-cl-edencentral-3-edle-yb-master-0    Bound    pvc-ac15e2e9-ee13-4237-acd5-bc2149d2e0b0   50Gi       RWO            managed-csi-premium-retain   <unset>                 10s
ybyugabyte-cl-edencentral-3-edle-datadir0-ybyugabyte-cl-edencentral-3-edle-yb-tserver-0   Bound    pvc-db94b81b-31a1-4e08-a173-3e07ce6a4d1e   128Gi      RWO            managed-csi-premium-retain   <unset>                 10s
ybyugabyte-cl-edencentral-3-edle-datadir1-ybyugabyte-cl-edencentral-3-edle-yb-tserver-0   Bound    pvc-ef959513-6fd2-449a-a117-29e358766109   128Gi      RWO            managed-csi-premium-retain   <unset>                 9s
yugabyte-operator-yugaware-storage                                                        Bound    pvc-f48e4f60-8e1f-4ded-a709-f476e21138eb   100Gi      RWO            default                      <unset>                 31m
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                                                       STORAGECLASS                 VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-041d5a45-4eb7-457e-b37d-efcfe33d2dfa   50Gi       RWO            Retain           Bound    yugabyte-operator/ybyugabyte-cl-edencentral-1-cdle-datadir0-ybyugabyte-cl-edencentral-1-cdle-yb-master-0    managed-csi-premium-retain   <unset>                          6s
pvc-1401bb26-d6f2-4390-bb2e-b4fe7818ded4   128Gi      RWO
NAME                                            READY   STATUS              RESTARTS      AGE
ybyugabyte-cl-edencentral-1-cdle-yb-master-0    2/3     RunContainerError   3 (4s ago)    105s
ybyugabyte-cl-edencentral-1-cdle-yb-tserver-0   3/4     CrashLoopBackOff    2 (26s ago)   105s
ybyugabyte-cl-edencentral-2-ddle-yb-master-0    2/3     CrashLoopBackOff    3 (17s ago)   105s
ybyugabyte-cl-edencentral-2-ddle-yb-tserver-0   3/4     CrashLoopBackOff    3 (22s ago)   105s
ybyugabyte-cl-edencentral-3-edle-yb-master-0    2/3     RunContainerError   2 (2s ago)    105s
ybyugabyte-cl-edencentral-3-edle-yb-tserver-0   3/4     CrashLoopBackOff    1 (20s ago)   105s
yugabyte-operator-yugabyte-k8s-operator-0       2/2     Running             0             33m
mycluster [ ~/yugabyte ]$ 

Solution

  •        Restart Count:  5
        Limits:
          cpu:     3
          memory:  8
        Requests:
          cpu:     2
          memory:  6
        Liveness:  exec [bas
    

    This is the issue. These memory values have no unit suffix, so Kubernetes interprets them as bytes: a limit of 8 means 8 bytes, which is why runc reports "unable to set memory limit to 8 (current usage: 7831552 ...)". The request has to be 6Gi, and the limit 8Gi.
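    A minimal sketch of the corrected tserver block, assuming the operator passes these values straight through as standard Kubernetes quantities (the master block needs the same treatment, e.g. 2Gi/3Gi):

      kubernetesOverrides:
        resource:
          tserver:
            requests:
              cpu: 2
              memory: 6Gi    # was 6, which Kubernetes parses as 6 bytes
            limits:
              cpu: 3
              memory: 8Gi    # was 8, i.e. 8 bytes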