Tags: azure, kubernetes, azure-aks

KubeDB provisioner and Azure workload identity


The KubeDB provisioner is responsible for pulling images from the registry. In our AKS cluster this is our private registry, which is configured to accept only Azure Active Directory authentication - username/password authentication is not possible.

So we set up federated identity credentials for the namespace service account and associate the respective Azure AD identity with it. The framework that makes this possible is Azure Workload Identity.
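
For reference, federated identity credentials of this kind can be created with the Azure CLI along the lines below (the identity, resource group, namespace and service account names are placeholders, and the issuer URL comes from the AKS cluster's OIDC issuer):

    # hypothetical names - substitute your own identity, resource group, namespace and service account
    az identity federated-credential create \
      --name kubedb-provisioner-federated \
      --identity-name my-workload-identity \
      --resource-group my-resource-group \
      --issuer "$AKS_OIDC_ISSUER" \
      --subject system:serviceaccount:kubedb:kubedb-kubedb-provisioner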

Assuming the federated identity credentials and the Azure identity itself already exist, the following steps must be taken to enable it for the KubeDB provisioner pod:

  1. The KubeDB provisioner pod must have the label azure.workload.identity/use: "true"
  2. The KubeDB namespace service account name must match the one recorded in the federated identity credentials subject
  3. The KubeDB namespace service account must have the annotation azure.workload.identity/client-id: <azure identity client id>

Now, the service account name is not an issue - we can create the federated identity credentials with any name as needed. The custom annotation is not a problem either, since the kubedb-provisioner chart allows customizing annotations.
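
To illustrate, the provisioner's service account would then end up looking roughly like this (the name, namespace and client id below are placeholders):

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: kubedb-kubedb-provisioner    # must match the subject of the federated identity credentials
      namespace: kubedb                   # placeholder namespace
      annotations:
        azure.workload.identity/client-id: "00000000-0000-0000-0000-000000000000"  # placeholder client id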

The problem is the custom label. It seems that KubeDB charts make a deliberate decision NOT to allow custom labels on the objects they create.

I am sure there is some K8s best practice behind it, but what can be done when it is absolutely necessary to add a label to a KubeDB pod?

To complicate things a bit more, we deploy all our apps using ArgoCD. So if I used some kind of out-of-band "labeller" (I do not know whether such a thing exists at all), it would be at odds with ArgoCD. I would have to instruct the latter to ignore the differences in labels, which is doable, but not ideal.
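
For what it's worth, ignoring such a label difference in ArgoCD would look roughly like this in the Application spec (the Deployment name is a placeholder):

    spec:
      ignoreDifferences:
        - group: apps
          kind: Deployment
          name: kubedb-kubedb-provisioner   # placeholder name of the provisioner Deployment
          jqPathExpressions:
            - .spec.template.metadata.labels."azure.workload.identity/use"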

My question - what is the best way to add custom labels to the KubeDB pods, given that their Helm charts do not support it and that we use ArgoCD to deploy them? (Besides creating a fork, that is.)

EDIT 1

The KubeDB charts - https://github.com/kubedb/installer/tree/v2024.8.21/charts

I am not hooked on KubeDB. I need it for Redis. So advice on alternative Redis operators is welcome as well.


Solution

  • As I mentioned in the comments, there are a few ways to get this done. One of them is a Kubernetes Mutating Admission Webhook, which allows you to automatically inject labels, annotations, or other configuration into Kubernetes objects when they are created or updated.

    Webhooks require HTTPS, so you need a certificate:

    openssl req -newkey rsa:2048 -nodes -keyout webhook-server.key -x509 -days 365 -out webhook-server.crt -subj "/CN=webhook-server.webhook-system.svc"
    

    followed by a Secret to store the TLS certificate and key:

    kubectl -n webhook-system create secret tls webhook-server-tls --cert=webhook-server.crt --key=webhook-server.key
    


    Deploy the mutating webhook server. I am using a simple Python-based webhook server that adds the azure.workload.identity/use: "true" label to KubeDB pods; the Deployment below starts with a generic OPA image as a placeholder, which gets replaced with the Python webhook image at the end.

    The Deployment for it:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mutating-webhook
      namespace: webhook-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mutating-webhook
      template:
        metadata:
          labels:
            app: mutating-webhook
        spec:
          containers:
          - name: mutating-webhook
            image: ghcr.io/open-policy-agent/opa:0.40.0
            args:
              - "run"
              - "--server"
              - "--addr=0.0.0.0:443"
              - "--tls-cert-file=/certs/webhook-server.crt"
              - "--tls-private-key-file=/certs/webhook-server.key"
            volumeMounts:
              - name: certs
                mountPath: "/certs"
                readOnly: true
          volumes:
          - name: certs
            secret:
              secretName: webhook-server-tls
    

    Service for the webhook:

    apiVersion: v1
    kind: Service
    metadata:
      name: mutating-webhook
      namespace: webhook-system
    spec:
      ports:
      - port: 443
        targetPort: 443
      selector:
        app: mutating-webhook
    


    And finally, create the MutatingWebhookConfiguration that tells Kubernetes to call your webhook server when a KubeDB pod is created:

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: mutating-webhook-configuration
    webhooks:
      - name: mutating-webhook.webhook-system.svc
        clientConfig:
          service:
            name: mutating-webhook
            namespace: webhook-system
            path: "/mutate"
          caBundle: <BASE64_ENCODED_CA_CERT>
        rules:
          - operations: ["CREATE"]
            apiGroups: [""]
            apiVersions: ["v1"]
            resources: ["pods"]
            scope: "Namespaced"
        failurePolicy: Fail
        admissionReviewVersions: ["v1"]
        matchPolicy: Equivalent
        objectSelector:
          matchExpressions:
            - key: app.kubernetes.io/name
              operator: In
              values: ["kubedb-provisioner"]
        namespaceSelector:
          matchExpressions:
            - key: kubernetes.io/metadata.name
              operator: In
              values: ["default"]
        sideEffects: None
    


    You get this caBundle value by running:

    cat webhook-server.crt | base64 | tr -d '\n'
    


    Now that we have the Mutating Admission Webhook configured, we just need to make sure the webhook server logic is correctly implemented to handle the mutation.
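
    For example, a Python server that mutates incoming pod creation requests by adding the azure.workload.identity/use: "true" label could look roughly like the minimal Flask sketch below (only the file name webhook.py and the /mutate path come from the configuration above; everything else is an assumption to adapt):

    # webhook.py - minimal sketch of the mutating webhook server
    import base64
    import json

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/mutate", methods=["POST"])
    def mutate():
        review = request.get_json()
        uid = review["request"]["uid"]
        pod = review["request"]["object"]

        # Build a JSONPatch that adds the label; '~1' escapes '/' in a JSON Pointer path.
        if pod["metadata"].get("labels"):
            patch = [{"op": "add",
                      "path": "/metadata/labels/azure.workload.identity~1use",
                      "value": "true"}]
        else:
            # The pod has no labels map yet, so create it with the label inside.
            patch = [{"op": "add",
                      "path": "/metadata/labels",
                      "value": {"azure.workload.identity/use": "true"}}]

        return jsonify({
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {
                "uid": uid,
                "allowed": True,
                "patchType": "JSONPatch",
                "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
            },
        })

    if __name__ == "__main__":
        # Serve over TLS with the certificate and key created earlier
        app.run(host="0.0.0.0", port=443,
                ssl_context=("/certs/webhook-server.crt", "/certs/webhook-server.key"))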

    To use this Python server inside our Kubernetes cluster, we can build a Docker image from it:

    FROM python:3.9-slim
    
    RUN pip install Flask
    
    COPY webhook.py /app/webhook.py
    COPY webhook-server.crt /certs/webhook-server.crt
    COPY webhook-server.key /certs/webhook-server.key
    
    WORKDIR /app
    
    CMD ["python", "webhook.py"]
    
    


    And update the mutating-webhook-deployment.yaml with that image.
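
    Something along these lines (the registry and tag are placeholders for wherever you push the image):

    docker build -t myregistry.azurecr.io/mutating-webhook:latest .
    docker push myregistry.azurecr.io/mutating-webhook:latest

    and then, roughly, in the Deployment:

    containers:
    - name: mutating-webhook
      # placeholder image reference - use whatever you pushed above
      image: myregistry.azurecr.io/mutating-webhook:latest
      ports:
        - containerPort: 443
      volumeMounts:
        - name: certs
          mountPath: "/certs"
          readOnly: true

    The OPA-specific args can be dropped, since the image's CMD already starts the Python webhook server.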


    References:

    Official K8s doc on Mutating webhook

    Example on mutating webhook