Tags: kubernetes, rabbitmq, kubectl, kubernetes-statefulset, rabbitmqctl

How is RabbitMQ pod traffic handled in Kubernetes when running a single node with 3 replicas?


I have RabbitMQ running on a single node with 3 replicas. The issue is that queues are not reflected properly across the pods, so I have forcibly forwarded all traffic to one pod. This is not a good approach when we need to update RabbitMQ, or in situations where the application connects to the other nodes.

rabbitmq.yml (statefulset)

---
apiVersion: v1
kind: Namespace
metadata:
  name: rabbitmq-test
  labels:
    name: rabbitmq-test
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  namespace: rabbitmq-test
  labels:
    app: rabbitmq
spec:
  type: NodePort
  ports:
    - name: amqp
      nodePort: 30000
      port: 5672
      protocol: TCP
      targetPort: 5672
    - name: management
      nodePort: 30001
      port: 15672
      protocol: TCP
      targetPort: 15672
  selector:
    app: rabbitmq
    statefulset.kubernetes.io/pod-name: rabbitmq-0
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rabbitmq
  namespace: rabbitmq-test
spec:
  selector:
    matchLabels:
      app: rabbitmq
  serviceName: "rabbitmq"
  minReadySeconds: 10
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      volumes:
        - name: rabbitmq-storage
          persistentVolumeClaim:
            claimName: rabbitmq-pvc
      terminationGracePeriodSeconds: 10
      containers:
        - name: rabbitmq
          image: rabbitmq:3.11.3-management
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh", "-c", "cp /mnt/data/test/rabbitmq_delayed_message_exchange-3.11.1.ez /opt/rabbitmq/plugins/ && rabbitmq-plugins --offline enable rabbitmq_peer_discovery_k8s rabbitmq_delayed_message_exchange"]
          imagePullPolicy: Always
          env:
            - name: RABBITMQ_DEFAULT_USER
              value: ""
            - name: RABBITMQ_DEFAULT_PASS
              value: ""
            - name: RABBITMQ_DEFAULT_VHOST
              value: ""
          ports:
            - name: amqp
              containerPort: 5672
            - name: management
              containerPort: 15672
          volumeMounts:
            - mountPath: "/mnt/data/test"
              name: rabbitmq-storage
          resources:
            requests:
              cpu: 500m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 512Mi
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: rabbitmq-hpa
  namespace: rabbitmq-test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: rabbitmq
  minReplicas: 3
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 60
  behavior:
    scaleDown:
      policies:
        - type: Pods
          value: 2
          periodSeconds: 60
        - type: Percent
          value: 5
          periodSeconds: 60

Solution

  • It looks like your Service selector is wrong. You are targeting only pods with the label statefulset.kubernetes.io/pod-name: rabbitmq-0.

    But in a StatefulSet only the first pod is named rabbitmq-0; the next ones are named rabbitmq-1, rabbitmq-2, and so on, so your Service currently sends traffic to just one pod. If the Service is intended to forward traffic to all the pods in your StatefulSet, then its label selector should match the label selector of the StatefulSet.
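To illustrate, a corrected Service could look like the sketch below (same names as in the question's manifest; the only change is dropping the pod-name line from the selector so the Service's endpoints include every pod labeled app: rabbitmq):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  namespace: rabbitmq-test
  labels:
    app: rabbitmq
spec:
  type: NodePort
  ports:
    - name: amqp
      nodePort: 30000
      port: 5672
      protocol: TCP
      targetPort: 5672
    - name: management
      nodePort: 30001
      port: 15672
      protocol: TCP
      targetPort: 15672
  selector:
    app: rabbitmq   # matches the StatefulSet's matchLabels; no pod-name restriction
```

After applying it, `kubectl get endpoints rabbitmq -n rabbitmq-test` should list all three pod IPs instead of just rabbitmq-0's.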