Tags: kubernetes, keycloak, kubernetes-helm, azure-aks

AKS - Running containers as root user should be avoided


I can see the following recommendation in Microsoft Defender for Cloud:

[Screenshot: Defender for Cloud recommendation "Running containers as root user should be avoided"]

Affected Resources:

[Screenshot: list of affected resources]

However, I have set the following configuration for Keycloak:

settings = {
  // ... other settings ...
  "containerSecurityContext.enabled"                 = "true"
  "containerSecurityContext.runAsUser"               = "1001"
  "containerSecurityContext.runAsNonRoot"            = "true"
  "containerSecurityContext.allowPrivilegeEscalation"= "false"
  // ... other settings ...
}
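
These are Helm chart values; passed directly to Helm, the equivalent would look roughly like the sketch below (the Bitnami Keycloak chart, release name, and namespace are assumptions based on the keys above):

helm upgrade --install keycloak bitnami/keycloak -n keycloak \
  --set containerSecurityContext.enabled=true \
  --set containerSecurityContext.runAsUser=1001 \
  --set containerSecurityContext.runAsNonRoot=true \
  --set containerSecurityContext.allowPrivilegeEscalation=false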

Why does it still think that the Keycloak Pod is running as the root user? How do I fix this?

Update - May 6th 2024:

% kubectl get pod keycloak-0 -n keycloak -o jsonpath='{.spec.containers[*].securityContext}'
{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":false,"runAsGroup":1001,"runAsNonRoot":true,"runAsUser":1001,"seccompProfile":{"type":"RuntimeDefault"}}%                                                                                                                           

% kubectl get pod keycloak-0 -n keycloak -o jsonpath='{.spec.securityContext}'
{"fsGroup":1001,"fsGroupChangePolicy":"Always"}%                                                                                                                         

% kubectl exec -it keycloak-0 -n keycloak -- sh
Defaulted container "keycloak" out of: keycloak, init-quarkus-directory (init)
$ id
uid=1001(keycloak) gid=1001 groups=1001

Solution

  • The issue you're encountering, where Microsoft Defender for Cloud still flags your Keycloak pod as running as a root user despite it being configured to run as a non-root user, can be caused by several factors.

    Double-check that the security context settings are properly applied and reflected at both the pod and container levels:

    kubectl get pod -l app=keycloak -o jsonpath='{.items[*].spec.securityContext} {.items[*].spec.containers[*].securityContext}'
    

    This command will show the security context settings for both the pod and its containers. Confirm that runAsUser, runAsNonRoot, and allowPrivilegeEscalation settings are correctly reflected.

    [Screenshot: output of the above command, showing runAsNonRoot set to true, runAsUser set to 1001, and allowPrivilegeEscalation set to false]

    Check whether any cluster-wide policies or AKS-specific configurations might override the security context specified in your deployment. Review Azure Policy (Gatekeeper) assignments, Pod Security admission settings, or other admission controllers that might be enforcing different rules (Pod Security Policies are deprecated and removed in recent Kubernetes versions), and check for any Role-Based Access Control (RBAC) settings or security policies that might influence pod execution behavior; a few commands that can help surface these are sketched below.
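
    A rough sketch of commands for surfacing such policies on AKS (the resource group and cluster names are placeholders; adjust the namespace to yours):

    # Check whether the Azure Policy add-on (which installs Gatekeeper) is enabled.
    az aks show -g <resource-group> -n <cluster-name> --query "addonProfiles.azurepolicy.enabled"

    # If Gatekeeper is present, list its constraint templates and constraints.
    kubectl get constrainttemplates
    kubectl get constraints

    # Check namespace-level Pod Security admission labels, if any.
    kubectl get namespace keycloak -o jsonpath='{.metadata.labels}'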

    Sometimes, the security recommendations may not update immediately after changes. Force an update or re-scan in Microsoft Defender for Cloud, if possible, to ensure it reflects the current state of your cluster.
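
    If you prefer to check the current assessment state from the CLI rather than the portal, something like the following may help (a sketch; the az security commands depend on your Azure CLI version):

    # List Defender for Cloud assessments for the subscription and look for the
    # "Running containers as root user should be avoided" entry.
    az security assessment list -o table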

    I have a feeling that your Keycloak pod is configured to run with a non-root user ID (1001), but the group ID (gid) remains at 0 (root). This might cause some security tools or policies to still flag the container because the group is root, even though the user is not. You can review the Azure Policy behind a Defender Recommendation for more detail.

    Run the command below to exec into the Pod, then run id inside it to check which user and group it is running as:

    kubectl exec -it $(kubectl get pod -l app=keycloak -o jsonpath="{.items[0].metadata.name}") -- /bin/bash
    

    [Screenshot: output of the above command, showing a UID of 1001 but a GID of 0 (root)]
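
    To check the UID and GID without opening an interactive shell, you can also run id directly (using the pod name and namespace from your update):

    # Print the UID/GID the Keycloak container is running as.
    kubectl exec keycloak-0 -n keycloak -- id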

    If you see something like this, adjust your Kubernetes deployment configuration so that both the user and group IDs are set to non-root values. Here's how you can modify it to include the runAsGroup directive, which wasn't previously set based on your output.

    Update your deployment YAML to explicitly set both runAsUser and runAsGroup to non-root values.

    Example

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: keycloak
      labels:
        app: keycloak
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: keycloak
      template:
        metadata:
          labels:
            app: keycloak
        spec:
          securityContext:
            runAsUser: 1001
            runAsGroup: 1001  # Set GID to a non-root value
            runAsNonRoot: true
            # Note: allowPrivilegeEscalation is only valid at the container level (set below)
          containers:
          - name: keycloak
            image: quay.io/keycloak/keycloak:15.0.2
            imagePullPolicy: Always
            ports:
            - containerPort: 8080
            securityContext:
              runAsUser: 1001
              runAsGroup: 1001  # Ensure the container also enforces this GID
              runAsNonRoot: true
              allowPrivilegeEscalation: false
    

    You can apply this with kubectl apply -f keycloak-deployment.yaml and verify.
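
    After applying, a quick way to verify the change (using the same label selector as the example above):

    # Confirm that runAsGroup is now reflected in the container security context.
    kubectl get pod -l app=keycloak -o jsonpath='{.items[*].spec.containers[*].securityContext.runAsGroup}'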

    [Screenshot: output of the above command, showing the successful restart of the deployment]

    Searching Defender CSPM for "keycloak" doesn't turn up any warnings; neither does searching for "running containers as root".

    After this, I did not get any "running containers as root" related notification.