azure kubernetes azure-aks kyverno

Kyverno admission controller pods stuck on init (AKS)


I'm trying to install Kyverno using the guides here:

https://kyverno.io/docs/installation/
https://nirmata.com/2023/10/13/simplify-image-verification-with-kyverno-and-azure-ad-workload-identity-on-aks/

However, after successfully installing the Helm chart, the admission controller pods are perpetually hung up on init:

kyverno-admission-controller-6cc87c69c7-d5fgm   0/1     Init:0/1   0          115s
kyverno-admission-controller-6cc87c69c7-mkdx2   0/1     Init:0/1   0          115s
kyverno-admission-controller-6cc87c69c7-zjlqr   0/1     Init:0/1   0          115s
kyverno-background-controller-bc589cf8c-kqw94   1/1     Running    0          115s
kyverno-background-controller-bc589cf8c-mxwrc   1/1     Running    0          115s
kyverno-cleanup-controller-5c9fdbbc5c-4gwxb     0/1     Running    0          115s
kyverno-cleanup-controller-5c9fdbbc5c-nmnr6     0/1     Running    0          115s
kyverno-reports-controller-6cc9fd6979-6s5qv     1/1     Running    0          115s
kyverno-reports-controller-6cc9fd6979-tbvx2     1/1     Running    0          115s
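
For anyone triaging the same state, the pod events and the init container's own logs are where I'd start (pod name taken from the listing above; I'm looking the init container's name up rather than assuming it):

# Events often name the exact init step that's wedged.
kubectl -n kyverno describe pod kyverno-admission-controller-6cc87c69c7-d5fgm

# Discover the init container's name, then pull its logs.
kubectl -n kyverno get pod kyverno-admission-controller-6cc87c69c7-d5fgm \
  -o jsonpath='{.spec.initContainers[*].name}'
kubectl -n kyverno logs kyverno-admission-controller-6cc87c69c7-d5fgm -c <init-container-name>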

The official documentation says I have to add an annotation when running on AKS, because the AKS Admission Enforcer otherwise fights with Kyverno and removes its webhooks:

https://kyverno.io/docs/installation/platform-notes/#notes-for-aks-users
https://learn.microsoft.com/en-us/azure/aks/faq#can-admission-controller-webhooks-impact-kube-system-and-internal-aks-namespaces
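
Kyverno registers these webhooks itself at runtime once the admission controller is healthy, so a quick way to check whether the enforcer has stripped them (and whether the annotation landed) is something like:

# If the enforcer removed Kyverno's webhooks, they'll be missing here.
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep -i kyverno

# Check whether the disable annotation actually landed on them.
kubectl get validatingwebhookconfigurations -o yaml | grep -n 'admissions.enforcer/disabled'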

Wiped the board clean and reinstalled, but this time with a values.yaml specifically containing the annotation to be passed through:

config:
  webhookAnnotations:
    admissions.enforcer/disabled: "true"

admissionController:
  rbac:
    serviceAccount:
      annotations:
        azure.workload.identity/client-id: ****
  podLabels:
    azure.workload.identity/use: "true"

and running

helm install kyverno kyverno/kyverno \
--namespace kyverno \
--set admissionController.replicas=3 \
--set backgroundController.replicas=2 \
--set cleanupController.replicas=2 \
--set reportsController.replicas=2 \
-f values.yaml
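
Assuming the values file is picked up at all, rendering the chart locally is a quick sanity check that the annotation makes it into the kyverno ConfigMap, which (as I understand it) is what the controller reads when it registers its webhooks:

helm template kyverno kyverno/kyverno --namespace kyverno \
  -f values.yaml | grep -n -B 2 'admissions.enforcer'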

Still the same issue. Any idea what I'm missing, or where to dig deeper to get these pods functional?

EDIT:

Getting logs from the init container, it looks like it's having trouble communicating with the Kubernetes API and referencing objects within its own namespace:

W1017 01:35:47.048643       1 reflector.go:533] k8s.io/client-go@v0.27.1/tools/cache/reflector.go:231: failed to list *v1.ConfigMap: Get "https://10.0.0.1:443/api/v1/namespaces/kyverno/configmaps?fieldSelector=metadata.name%3Dkyverno-metrics&limit=500&resourceVersion=0": EOF
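
That 10.0.0.1 is the in-cluster kubernetes service IP, so to rule out anything Kyverno-specific I'd hit the same endpoint from a throwaway pod (the image and the /version probe are just my go-to; adjust as needed):

# On most clusters a healthy API server answers /version even without
# credentials; an EOF or timeout here points at the network path instead.
kubectl -n kyverno run api-check --rm -it --restart=Never \
  --image=curlimages/curl -- -sk -m 5 https://10.0.0.1:443/version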

Solution

  • Networking Issue

    Checked the container logs of the pods and saw that the init container could not list its own ConfigMap within its namespace, because it was unable to reach the Kubernetes API server.

    I caught it in the firewall logs as an SNI/TLS exception: the target subnet was not in any of the route tables, so traffic that should have stayed internal was attempting to egress and hitting the firewall. That TLS inspection failure is the EOF above, and it's what prevented the admission controller pods from running.
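
    For anyone verifying the same failure mode, dumping the effective routes on a node NIC shows where traffic to the API server is actually being sent (resource group and NIC name below are placeholders):

    # <node-rg> is the cluster's node resource group (usually MC_...);
    # <node-nic> is any AKS node's NIC.
    az network nic show-effective-route-table \
      --resource-group <node-rg> --name <node-nic> --output table
    # A next hop of VirtualAppliance for the relevant prefix means that
    # traffic is being forced out to the firewall instead of staying in the VNet.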