Tags: amazon-eks, horizontal-pod-autoscaling, keda, metrics-server

Why isn't Keda's Horizontal Pod Autoscaling (HPA) collecting CPU/Memory metrics in AWS Elastic Kubernetes Service (EKS)?


I ran into this problem and solved it, so this Q&A is here in case somebody else is sifting through the AWS, Keda, and/or Kubernetes docs trying to deduce the answer.

My team deployed Keda with the goal of horizontally autoscaling pods in Kubernetes based on Redis queue length, CPU utilization, and memory utilization. Post-deployment, we noticed that the horizontal pod autoscaler, as viewed in ArgoCD, was throwing the error:

unable to get metrics for resource memory: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)

Some other errors we saw:

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1"
Error from server (NotFound): the server could not find the requested resource
$ kubectl top nodes
error: Metrics API not available

Of course, these errors are in addition to the visible symptom: pods not autoscaling when the CPU/Memory utilization thresholds are reached.

This occurs even though scaling on Redis queue length works as expected via Keda/HPA.
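For context, here is a trimmed, illustrative sketch of the kind of ScaledObject involved. The names, thresholds, and Redis address are placeholders rather than our actual config:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject
spec:
  scaleTargetRef:
    name: example-deployment
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    # External scaler: scales on the length of a Redis list
    - type: redis
      metadata:
        address: redis.default.svc.cluster.local:6379
        listName: example-queue
        listLength: "10"
    # Resource scalers: scale on average CPU/Memory utilization
    - type: cpu
      metricType: Utilization
      metadata:
        value: "75"
    - type: memory
      metricType: Utilization
      metadata:
        value: "75"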

What can we do to ensure that CPU and Memory utilization trigger scaling as expected?


Solution

  • As it turns out, this is caused by Amazon EKS not deploying the Kubernetes Metrics Server by default. The HPA's CPU/Memory targets rely on the resource metrics API (metrics.k8s.io), which metrics-server provides; Keda's external scalers (such as the Redis queue scaler) do not depend on it, which is why queue-based scaling still worked.

    To resolve this, we installed metrics-server via its Helm chart to our cluster; a sketch of the install follows.
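A minimal sketch of that install, assuming the kubernetes-sigs chart repository and the kube-system namespace (the release name and namespace are our choices; adjust for your cluster):

$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
$ helm repo update
$ helm upgrade --install metrics-server metrics-server/metrics-server --namespace kube-system

Once the metrics-server pod is ready, the checks from the question should start returning data instead of errors:

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1"
$ kubectl top nodes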