I currently have a cluster on EKS in Auto Mode and I'm doing some initial tests.
I want to deploy KEDA to perform horizontal scaling, but since it's a critical piece of the infrastructure, it should run in the system
nodepool, where critical workloads live.
According to this page of the documentation https://docs.aws.amazon.com/eks/latest/userguide/critical-workload.html I should add these lines to my deployment:
spec:
  nodeSelector:
    karpenter.sh/nodepool: system
  tolerations:
    - key: "CriticalAddonsOnly"
      operator: "Exists"
However, since I'm not very experienced, I can't find a way to pass that information when installing KEDA using Helm. I put those lines in a file and then ran the following command:
helm upgrade --install keda kedacore/keda --namespace keda --create-namespace -f config.yaml --wait
but in all my tests, KEDA is always deployed on a new instance, not on the "main" (system) one.
Any suggestions?
Your config.yaml can hold the configuration that you need.
There are usually two ways of figuring out how to configure things in Helm charts: dumping the chart's default values with helm show values, or reading the chart's values.yaml in its repository.
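For example, the first approach looks like this from the command line (a minimal sketch, assuming you write the defaults to a local file to browse them):

helm repo add kedacore https://kedacore.github.io/charts   # skip if the repo is already added
helm repo update
helm show values kedacore/keda > default-values.yaml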
In this case, I used the second approach and found the relevant values file in the chart repository (I looked at the main branch, but you could look at the tag matching your chart version).
In this file, there is this part:
# -- Node selector for pod scheduling ([docs](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/))
nodeSelector: {}
# -- Tolerations for pod scheduling ([docs](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/))
tolerations: []
which means that if you add the following lines to your config.yaml
nodeSelector:
  karpenter.sh/nodepool: system
tolerations:
  - key: "CriticalAddonsOnly"
    operator: "Exists"
you should be able to pass the configuration you need to your deployment.
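As a quick sanity check (assuming config.yaml is the values file you pass with -f), you can render the chart locally to confirm the scheduling fields end up in the manifests, and after installing, verify which nodes the KEDA pods were actually scheduled on:

helm template keda kedacore/keda --namespace keda -f config.yaml | grep -A 2 nodeSelector
kubectl get pods -n keda -o wide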