kubernetes rabbitmq cephfs

Error - Unable to attach or mount volumes: unmounted volumes=[data]


I have been having a weird problem in Kubernetes. When I ran the install command, the pods never started, even though the PVC was bound. It gave the errors below, in this order:

0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.

Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[rabbitmq-token-xl9kq configuration data]: timed out waiting for the condition

attachdetach-controller  AttachVolume.Attach failed for volume "pvc-08de562a-2ee2-4c81-9b34-d58736b48120" : attachdetachment timeout for volume 0001-0009-rook-ceph-0000000000000001-83154669-0997-11eb-a1ec-726af9b2e1e1

Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[configuration data rabbitmq-token-xl9kq]: timed out waiting for the condition


I installed RabbitMQ via Helm:

helm install rabbitmq --namespace rabbitmq -f rabbitmq-values.yaml bitnami/rabbitmq
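
The chart's persistence template creates one data PVC per replica as the statefulset pods come up; they can be checked like this (a sketch; claim names follow the usual <volumeClaimTemplate>-<pod> convention for my release name):

kubectl -n rabbitmq get pvc
# with replicaCount: 3, the claims appear as
#   data-rabbitmq-0, data-rabbitmq-1, data-rabbitmq-2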

Here is my rabbitmq-values.yaml file:

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName
#   storageClass: myStorageClass

## Bitnami RabbitMQ image version
## ref: https://hub.docker.com/r/bitnami/rabbitmq/tags/
##
image:
  registry: docker.io
  repository: bitnami/rabbitmq
  tag: 3.8.9-debian-10-r0

  ## set to true if you would like to see extra information on logs
  ## it turns on BASH and NAMI debugging in minideb
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  ##
  debug: false

  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

## String to partially override rabbitmq.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override rabbitmq.fullname template
##
# fullnameOverride:

## Kubernetes Cluster Domain
##
clusterDomain: cluster.local

## RabbitMQ Authentication parameters
##
auth:
  ## RabbitMQ application username
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  username: rabbitmq

  ## RabbitMQ application password
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  password: Qwe123.
  # existingPasswordSecret: name-of-existing-secret

  ## Erlang cookie to determine whether different nodes are allowed to communicate with each other
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  erlangCookie: SWQOKODSQALRPCLNMEQGM4MCSB
  # existingErlangSecret: name-of-existing-secret

  ## Enable encryption to rabbitmq
  ## ref: https://www.rabbitmq.com/ssl.html
  ##
  tls:
    enabled: false
    failIfNoPeerCert: true
    sslOptionsVerify: verify_peer
    caCertificate: |-
    serverCertificate: |-
    serverKey: |-
    # existingSecret: name-of-existing-secret-to-rabbitmq

## Value for the RABBITMQ_LOGS environment variable
## ref: https://www.rabbitmq.com/logging.html#log-file-location
##
logs: '-'

## RabbitMQ Max File Descriptors
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
## ref: https://www.rabbitmq.com/install-debian.html#kernel-resource-limits
##
ulimitNofiles: '65536'

## RabbitMQ maximum available scheduler threads and online scheduler threads. By default it will create a thread per CPU detected; the following parameters let you tune it manually.
## ref: https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html#scheduler-threads
## ref: https://github.com/bitnami/charts/issues/2189
##
# maxAvailableSchedulers: 2
# onlineSchedulers: 1

## The memory threshold under which RabbitMQ will stop reading from client network sockets, in order to avoid being killed by the OS
## ref: https://www.rabbitmq.com/alarms.html
## ref: https://www.rabbitmq.com/memory.html#threshold
##
memoryHighWatermark:
  enabled: true
  ## Memory high watermark type. Either absolute or relative
  ##
  type: "relative"
  ## Memory high watermark value.
  ## The default value of 0.4 stands for 40% of available RAM
  ## Note: the memory relative limit is applied to the resource.limits.memory to calculate the memory threshold
  ## You can also use an absolute value, e.g.: 256MB
  ##
  value: 0.4

## Plugins to enable
##
plugins: "rabbitmq_management rabbitmq_peer_discovery_k8s"

## Community plugins to download during container initialization.
## Combine it with extraPlugins to also enable them.
##
# communityPlugins:

## Extra plugins to enable
## Use this instead of `plugins` to add new plugins
##
extraPlugins: "rabbitmq_auth_backend_ldap"

## Clustering settings
##
clustering:
  addressType: hostname
  ## Rebalance master for queues in cluster when new replica is created
  ## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance
  ##
  rebalance: false

  ## forceBoot: executes 'rabbitmqctl force_boot' to force boot a cluster that was shut down
  ## unexpectedly and in an unknown order.
  ## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
  ##
  forceBoot: false

## Loading a RabbitMQ definitions file to configure RabbitMQ
##
loadDefinition:
  enabled: false
  ## Can be templated if needed, e.g.
  ## existingSecret: "{{ .Release.Name }}-load-definition"
  ##
  # existingSecret:

## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:

## Additional environment variables to set
## E.g:
## extraEnvVars:
##   - name: FOO
##     value: BAR
##
extraEnvVars: []

## ConfigMap with extra environment variables
##
# extraEnvVarsCM:

## Secret with extra environment variables
##
# extraEnvVarsSecret:

## Extra ports to be included in container spec, primarily informational
## E.g:
## extraContainerPorts:
## - name: new_port_name
##   containerPort: 1234
##
extraContainerPorts: []

## Configuration file content: required cluster configuration
## Do not override unless you know what you are doing.
## To add more configuration, use `extraConfiguration` or `advancedConfiguration` instead
##
configuration: |-
  ## Username and password
  default_user = {{ .Values.auth.username }}
  default_pass = CHANGEME
  ## Clustering
  cluster_formation.peer_discovery_backend  = rabbit_peer_discovery_k8s
  cluster_formation.k8s.host = kubernetes.default.svc.{{ .Values.clusterDomain }}
  cluster_formation.node_cleanup.interval = 10
  cluster_formation.node_cleanup.only_log_warning = true
  cluster_partition_handling = autoheal
  # queue master locator
  queue_master_locator = min-masters
  # enable guest user
  loopback_users.guest = false
  {{ tpl .Values.extraConfiguration . }}
  {{- if .Values.auth.tls.enabled }}
  ssl_options.verify = {{ .Values.auth.tls.sslOptionsVerify }}
  listeners.ssl.default = {{ .Values.service.tlsPort }}
  ssl_options.fail_if_no_peer_cert = {{ .Values.auth.tls.failIfNoPeerCert }}
  ssl_options.cacertfile = /opt/bitnami/rabbitmq/certs/ca_certificate.pem
  ssl_options.certfile = /opt/bitnami/rabbitmq/certs/server_certificate.pem
  ssl_options.keyfile = /opt/bitnami/rabbitmq/certs/server_key.pem
  {{- end }}
  {{- if .Values.ldap.enabled }}
  auth_backends.1 = rabbit_auth_backend_ldap
  auth_backends.2 = internal
  {{- range $index, $server := .Values.ldap.servers }}
  auth_ldap.servers.{{ add $index 1 }} = {{ $server }}
  {{- end }}
  auth_ldap.port = {{ .Values.ldap.port }}
  auth_ldap.user_dn_pattern = {{ .Values.ldap.user_dn_pattern  }}
  {{- if .Values.ldap.tls.enabled }}
  auth_ldap.use_ssl = true
  {{- end }}
  {{- end }}
  {{- if .Values.metrics.enabled }}
  ## Prometheus metrics
  prometheus.tcp.port = 9419
  {{- end }}
  {{- if .Values.memoryHighWatermark.enabled }}
  ## Memory Threshold
  total_memory_available_override_value = {{ include "rabbitmq.toBytes" .Values.resources.limits.memory }}
  vm_memory_high_watermark.{{ .Values.memoryHighWatermark.type }} = {{ .Values.memoryHighWatermark.value }}
  {{- end }}

## Configuration file content: extra configuration
## Use this instead of `configuration` to add more configuration
##
extraConfiguration: |-
  #default_vhost = {{ .Release.Namespace }}-vhost
  #disk_free_limit.absolute = 50MB
  #load_definitions = /app/load_definition.json

## Configuration file content: advanced configuration
## Use this as additional configuration in classic config format (Erlang term configuration format)
##
## If you set LDAP with TLS/SSL enabled and you are using self-signed certificates, uncomment these lines.
## advancedConfiguration: |-
##   [{
##     rabbitmq_auth_backend_ldap,
##     [{
##         ssl_options,
##         [{
##             verify, verify_none
##         }, {
##             fail_if_no_peer_cert,
##             false
##         }]
##     }]
##   }].
##
advancedConfiguration: |-

## LDAP configuration
##
ldap:
  enabled: false
  ## List of LDAP servers hostnames
  ##
  servers: []
  ## LDAP servers port
  ##
  port: "389"
  ## Pattern used to translate the provided username into a value to be used for the LDAP bind
  ## ref: https://www.rabbitmq.com/ldap.html#usernames-and-dns
  ##
  user_dn_pattern: cn=${username},dc=example,dc=org
  tls:
    ## If you enable TLS/SSL you can set advanced options using the advancedConfiguration parameter.
    ##
    enabled: false

## extraVolumes and extraVolumeMounts allow you to mount other volumes
## Examples:
## extraVolumeMounts:
##   - name: extras
##     mountPath: /usr/share/extras
##     readOnly: true
## extraVolumes:
##   - name: extras
##     emptyDir: {}
extraVolumeMounts: []
extraVolumes: []

## Optionally specify extra secrets to be created by the chart.
## This can be useful when combined with load_definitions to automatically create the secret containing the definitions to be loaded.
## Example:
## extraSecrets:
##   load-definition:
##     load_definition.json: |
##       {
##         ...
##       }
##
extraSecrets: {}

## Number of RabbitMQ replicas to deploy
##
replicaCount: 3

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## RabbitMQ nodes should be initialized one by one when building the cluster for the first time.
## Therefore, the default value of podManagementPolicy is 'OrderedReady'.
## Once a RabbitMQ node participates in the cluster, it waits at reboot for a response from
## another RabbitMQ node in the same cluster, except for the last RabbitMQ of the same cluster.
## If the cluster exits gracefully, you do not need to change the podManagementPolicy
## because the first RabbitMQ of the statefulset will always be the last of the cluster.
## However, if the last RabbitMQ of the cluster is not the first RabbitMQ due to a failure,
## you must change podManagementPolicy to 'Parallel'.
## ref : https://www.rabbitmq.com/clustering.html#restarting
##
podManagementPolicy: OrderedReady

## Pod labels. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}

## Pod annotations. Evaluated as a template
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}

## updateStrategy for RabbitMQ statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategyType: RollingUpdate

## Name of the priority class to be used by RabbitMQ pods, priority class needs to be created beforehand
## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Affinity for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Node labels for pod assignment. Evaluated as a template
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## RabbitMQ pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
  fsGroup: 1001
  runAsUser: 1001

## RabbitMQ containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## Example:
##   containerSecurityContext:
##     capabilities:
##       drop: ["NET_RAW"]
##     readOnlyRootFilesystem: true
##
containerSecurityContext: {}

## RabbitMQ containers' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 1000m
    memory: 2Gi

## RabbitMQ containers' liveness and readiness probes.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 120
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 3
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 10
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 3
  successThreshold: 1

## Custom Liveness probe
##
customLivenessProbe: {}

## Custom Readiness probe
##
customReadinessProbe: {}

## Add init containers to the pod
## Example:
## initContainers:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
initContainers: {}

## Add sidecars to the pod.
## Example:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
sidecars: {}

## RabbitMQ pods ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the rabbitmq.fullname template
  ##
  # name:

## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  ## Specifies whether RBAC rules should be created
  ## binding RabbitMQ ServiceAccount to a role
  ## that allows RabbitMQ pods to query the K8s API
  ##
  create: true

persistence:
  ## this enables PVC templates that will create one per pod
  ##
  enabled: true

  ## rabbitmq data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "rook-cephfs"
  ## selector can be used to match an existing PersistentVolume
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  selector: {}
  accessMode: ReadWriteMany

  ## Existing PersistentVolumeClaims
  ## The value is evaluated as a template
  ## So, for example, the name can depend on .Release or .Chart
  # existingClaim: ""

  ## If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well.
  ##
  size: 8Gi

## Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
  create: true
  ## Min number of pods that must still be available after the eviction
  ##
  minAvailable: 1
  ## Max number of pods that can be unavailable after the eviction
  ##
  # maxUnavailable: 1

## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
  ## Enable creation of NetworkPolicy resources
  ##
  enabled: true
  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the ports RabbitMQ is listening
  ## on. When true, RabbitMQ will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true
  ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
  ##
  # additionalRules:
  #  - matchLabels:
  #    - role: frontend
  #  - matchExpressions:
  #    - key: role
  #      operator: In
  #      values:
  #        - frontend

## Kubernetes service type
service:
  type: ClusterIP
  ## Amqp port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  port: 5672

  ## Amqp Tls port
  ##
  tlsPort: 5671

  ## Node port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # nodePort: 30672

  ## Node port Tls
  ##
  # tlsNodePort: 30671

  ## Dist port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  distPort: 25672

  ## Node port (Manager)
  ##
  # distNodePort: 30676

  ## RabbitMQ Manager port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  managerPort: 15672

  ## Node port (Manager)
  ##
  # managerNodePort: 30673

  ## RabbitMQ Prometheus metrics port
  ##
  metricsPort: 9419

  ## Node port for metrics
  ##
  # metricsNodePort: 30674

  ## Node port for EPMD Discovery
  ##
  # epmdNodePort: 30675

  ## Extra ports to expose
  ## E.g.:
  ## extraPorts:
  ## - name: new_svc_name
  ##   port: 1234
  ##   targetPort: 1234
  ##
  extraPorts: []

  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24

  ## Set the ExternalIPs
  ##
  externalIPs:
    - 172.17.27.130

  ## Set the LoadBalancerIP
  ##
  # loadBalancerIP:

  ## Service labels. Evaluated as a template
  ##
  labels: {}

  ## Service annotations. Evaluated as a template
  ## Example:
  ## annotations:
  ##   service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  ##
  annotations: {}

## Configure the ingress resource that allows you to access the
## RabbitMQ installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  ##
  enabled: true

  ## Path for the default host
  ##
  path: /

  ## Set this to true in order to add the corresponding annotations for cert-manager
  ##
  certManager: false

  ## When the ingress is enabled, a host pointing to this will be created
  ##
  hostname: rabbit.csb.gov.tr

  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  ##
  annotations: {}

  ## Enable TLS configuration for the hostname defined at ingress.hostname parameter
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## or a custom one if you use the tls.existingSecret parameter
  ## You can use the ingress.secrets parameter to create this TLS secret or rely on cert-manager to create it
  ##
  tls: false
  ## existingSecret: name-of-existing-secret

  ## The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## extraHosts:
  ## - name: rabbitmq.local
  ##   path: /
  ##

  ## The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## extraTls:
  ## - hosts:
  ##     - rabbitmq.local
  ##   secretName: rabbitmq.local-tls
  ##

  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ##
  secrets: []
  ## - name: rabbitmq.local-tls
  ##   key:
  ##   certificate:
  ##


## Prometheus Metrics
##
metrics:
  enabled: true
  plugins: "rabbitmq_prometheus"
  ## Prometheus pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "{{ .Values.service.metricsPort }}"

  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    ##
    enabled: false
    ## Specify the namespace in which the serviceMonitor resource will be created
    ##
    # namespace: ""
    ## Specify the interval at which metrics should be scraped
    ##
    interval: 30s
    ## Specify the timeout after which the scrape is ended
    ##
    # scrapeTimeout: 30s
    ## Specify Metric Relabellings to add to the scrape endpoint
    ##
    # relabellings:
    ## Specify the honorLabels parameter for the scrape endpoint
    ##
    honorLabels: false
    ## Specify the release label for the ServiceMonitor. Sometimes it needs to be custom for the Prometheus Operator to pick it up
    ##
    # release: ""
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    ##
    additionalLabels: {}

  ## Custom PrometheusRule to be defined
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  ##
  prometheusRule:
    enabled: false
    additionalLabels: {}
    namespace: ""
    ## List of rules, used as template by Helm.
    ## These are just example rules inspired by https://awesome-prometheus-alerts.grep.to/rules.html
    # rules:
    #   - alert: RabbitmqDown
    #     expr: rabbitmq_up{service="{{ template "rabbitmq.fullname" . }}"} == 0
    #     for: 5m
    #     labels:
    #       severity: error
    #     annotations:
    #       summary: Rabbitmq down (instance {{ "{{ $labels.instance }}" }})
    #       description: RabbitMQ node down
    #   - alert: ClusterDown
    #     expr: |
    #       sum(rabbitmq_running{service="{{ template "rabbitmq.fullname" . }}"})
    #       < {{ .Values.replicaCount }}
    #     for: 5m
    #     labels:
    #       severity: error
    #     annotations:
    #       summary: Cluster down (instance {{ "{{ $labels.instance }}" }})
    #       description: |
    #           Less than {{ .Values.replicaCount }} nodes running in RabbitMQ cluster
    #           VALUE = {{ "{{ $value }}" }}
    #   - alert: ClusterPartition
    #     expr: rabbitmq_partitions{service="{{ template "rabbitmq.fullname" . }}"} > 0
    #     for: 5m
    #     labels:
    #       severity: error
    #     annotations:
    #       summary: Cluster partition (instance {{ "{{ $labels.instance }}" }})
    #       description: |
    #           Cluster partition
    #           VALUE = {{ "{{ $value }}" }}
    #   - alert: OutOfMemory
    #     expr: |
    #       rabbitmq_node_mem_used{service="{{ template "rabbitmq.fullname" . }}"}
    #       / rabbitmq_node_mem_limit{service="{{ template "rabbitmq.fullname" . }}"}
    #       * 100 > 90
    #     for: 5m
    #     labels:
    #       severity: warning
    #     annotations:
    #       summary: Out of memory (instance {{ "{{ $labels.instance }}" }})
    #       description: |
    #           Memory available for RabbitMQ is low (< 10%)\n  VALUE = {{ "{{ $value }}" }}
    #           LABELS: {{ "{{ $labels }}" }}
    #   - alert: TooManyConnections
    #     expr: rabbitmq_connectionsTotal{service="{{ template "rabbitmq.fullname" . }}"} > 1000
    #     for: 5m
    #     labels:
    #       severity: warning
    #     annotations:
    #       summary: Too many connections (instance {{ "{{ $labels.instance }}" }})
    #       description: |
    #           RabbitMQ instance has too many connections (> 1000)
    #           VALUE = {{ "{{ $value }}" }}\n  LABELS: {{ "{{ $labels }}" }}
    rules: []

## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
  enabled: false
  ## Bitnami Minideb image
  ## ref: https://hub.docker.com/r/bitnami/minideb/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: buster
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## Init Container resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 100m
    #   memory: 128Mi
    requests: {}
    #   cpu: 100m
    #   memory: 128Mi
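
A side note on the memory settings in this file: with memoryHighWatermark.type set to "relative", the chart's configuration template feeds resources.limits.memory into total_memory_available_override_value and applies the 0.4 watermark to it, so the effective threshold here works out to roughly:

0.4 * 2Gi = 0.4 * 2,147,483,648 bytes ≈ 859 MB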

Outputs attached as screenshots: kubectl describe pod rabbitmq-0, kubectl get pv, kubectl get pvc, and kubectl get sc.

Lastly, here is "lsblk -f" run on one node (also a screenshot).
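
For attach timeouts like this one, a few generic checks can help locate where things are stuck (a sketch, not from my original debugging; the PV name is taken from the attachdetach-controller event above, and Rook label/container names can vary by version):

# Was a VolumeAttachment object ever created for the stuck PV?
kubectl get volumeattachment | grep pvc-08de562a-2ee2-4c81-9b34-d58736b48120

# Are the Rook CSI plugin/provisioner pods healthy?
kubectl -n rook-ceph get pods | grep csi

# The CephFS CSI plugin logs usually show why an attach/mount stalls
kubectl -n rook-ceph logs -l app=csi-cephfsplugin -c csi-cephfsplugin --tail=50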


Solution

  • I was using an old cluster.yaml file; I added 'allowUninstallWithVolumes: false' under cleanupPolicy, and that solved everything. A sketch of where that field lives follows.
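
For reference, a minimal sketch of where that setting sits in a Rook CephCluster manifest (field names per Rook's CephCluster CRD; everything else in the spec omitted):

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ... rest of the cluster spec unchanged ...
  cleanupPolicy:
    # refuse cluster teardown while volumes it backs still exist
    allowUninstallWithVolumes: false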