I have a HorizontalPodAutoscaler that scales my pods based on CPU. minReplicas here is set to 5:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 5
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
I've then added CronJobs to scale my HorizontalPodAutoscaler up and down based on the time of day:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: cron-runner
rules:
  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["patch", "get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cron-runner
  namespace: production
subjects:
  - kind: ServiceAccount
    name: sa-cron-runner
    namespace: production
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: production
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-up-job
  namespace: production
spec:
  schedule: "56 11 * * 1-6"
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
            - name: django-scale-up-job
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
          restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-down-job
  namespace: production
spec:
  schedule: "30 20 * * 1-6"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
            - name: django-scale-down-job
              image: bitnami/kubectl:latest
              command:
                - /bin/sh
                - -c
                - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
          restartPolicy: OnFailure
This works really well, except that when I deploy, it overwrites the patched minReplicas value with the one in the HorizontalPodAutoscaler spec (in my case, 5).
I'm deploying my HPA using kubectl apply -f ~/autoscale.yaml.
Is there a way of handling this situation? Do I need some kind of shared logic so that my deployment scripts can work out what the minReplicas value should be, or is there a simpler way of handling this?
I think you could also consider the following two options:

Option 1: Use Helm to manage the life-cycle of your application.
The main idea behind this solution is to query the state of the specific cluster resource (here the HPA) before trying to create or recreate it with helm install/upgrade commands; in other words, check the current minReplicas value each time before you upgrade your application stack.
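As a minimal sketch of that idea, assuming the HPA is rendered from a chart template (the chart layout is my assumption, not part of the original setup), Helm's built-in lookup function can read the live HPA and reuse its current minReplicas, falling back to the default on first install:

# templates/hpa.yaml -- hypothetical chart file
{{- $live := lookup "autoscaling/v2beta2" "HorizontalPodAutoscaler" "production" "myapp-web" }}
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  {{- if $live }}
  # Live object found: keep whatever minReplicas the CronJobs last set
  minReplicas: {{ $live.spec.minReplicas }}
  {{- else }}
  # First install: no live object yet, fall back to the default
  minReplicas: 5
  {{- end }}
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

Keep in mind that lookup returns an empty map under helm template and --dry-run, so the fallback branch is taken there; the live value is only picked up during a real helm install/upgrade against the cluster.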
Option 2: Manage the HPA resource separately from the application manifest files.
Here you can hand this task over to a dedicated HPA operator, which can coexist with your CronJobs that adjust minReplicas on a specific schedule.
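Even without an operator, the simplest form of this separation is to keep the HPA out of the manifest set your deploy pipeline applies, so a routine kubectl apply can never reset minReplicas. A rough sketch, with file names that are purely illustrative:

# Hypothetical layout:
#   deploy/deployment.yaml, deploy/service.yaml  <- applied on every deploy
#   autoscaling/hpa.yaml                         <- applied once at bootstrap
kubectl apply -f deploy/                # never includes the HPA
kubectl apply -f autoscaling/hpa.yaml   # run once; afterwards only the CronJobs patch it

The trade-off is that deliberate changes to the HPA spec itself (for example a new maxReplicas) now need their own, separate apply step.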