I have several cronjobs that require quite a lot of both CPU and RAM, but they only run once a day for a few minutes (scraping data from websites). My question is: can I automatically scale down some of my other deployments while these jobs are running, and then scale them back up afterwards?
The only way I can think of right now is to use a k8s client inside my cronjob code that scales the other deployments down at startup and back up before finishing, but that seems hacky and possibly unsafe.
Is there any "proper Kubernetes way", or any best practice, to do this instead? Ideally I would define some minimal number of live replicas for each deployment and leave the remaining decisions (how many replicas of which deployment to run) to k8s.
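For context, the approach I have in mind would look roughly like this (just a sketch; the deployment name, image and the RBAC-enabled service account are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scraper
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: scaler-sa    # would need RBAC permission to scale deployments
          restartPolicy: Never
          initContainers:
            - name: scale-down
              image: bitnami/kubectl:latest
              command: ["kubectl", "scale", "deployment/web-frontend", "--replicas=1"]
          containers:
            - name: scraper
              image: my-scraper:latest     # placeholder image
              resources:
                requests:
                  cpu: "2"
                  memory: 4Gi
          # scaling back up would have to happen in yet another step,
          # which is exactly the part that feels fragile to me
```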
You have not said much about your cluster setup (for example, whether new nodes can be added on demand). Based on what you have described, you could do any of the following.
Use Pod Priority and Preemption: give the cronjob pods a higher priority than some of your low-priority deployments. If a high-priority pod cannot be scheduled because there aren't enough available resources (CPU, memory, etc.), Kubernetes will try to free up resources by preempting lower-priority pods.
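A minimal sketch of what that could look like (the class names, priority values and cron schedule are just examples):

```yaml
# High priority class for the scraping jobs
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: scraper-high
value: 1000000            # higher number = higher priority
globalDefault: false
description: "Priority for the daily scraping cronjobs"
---
# Low priority class for deployments that may be preempted
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-low
value: 100
preemptionPolicy: Never   # these pods never preempt others themselves
globalDefault: false
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scraper
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          priorityClassName: scraper-high  # scheduler may evict lower-priority pods to fit this
          restartPolicy: OnFailure
          containers:
            - name: scraper
              image: my-scraper:latest     # placeholder
              resources:
                requests:
                  cpu: "2"
                  memory: 4Gi
```

The deployments you are willing to shrink would set `priorityClassName: batch-low` in their pod template; any preempted pods are recreated by their Deployment as soon as capacity frees up again, which also covers the "scale back up" part of your question.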
You can use Node Autoscaling to add or remove nodes based on the overall resource demand, and use Taints and Tolerations to dedicate the newly created nodes to the resource-intensive cronjobs.
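As an illustration (this assumes a cluster autoscaler is available and that you have a node pool labelled and tainted for scraping, e.g. via `kubectl taint nodes <node> dedicated=scraping:NoSchedule`), the cronjob pod spec could look like this:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scraper
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          # Only schedule onto the dedicated (autoscaled) node pool
          nodeSelector:
            workload: scraping            # label applied to the dedicated node pool
          tolerations:
            - key: "dedicated"
              operator: "Equal"
              value: "scraping"
              effect: "NoSchedule"        # matches the taint on those nodes
          containers:
            - name: scraper
              image: my-scraper:latest    # placeholder
              resources:
                requests:
                  cpu: "2"
                  memory: 4Gi
```

Most managed autoscalers can scale such a pool up when a pending pod tolerates its taint and scale it back down to its minimum once the job finishes, so your regular deployments are never touched at all.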
Use a Horizontal Pod Autoscaler (HPA) on your deployments to automatically scale the number of pods based on CPU and memory usage.
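For example, an HPA that keeps a deployment between a minimum and maximum replica count based on CPU utilization (the names and thresholds are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend       # placeholder deployment name
  minReplicas: 2             # the "minimal amount of alive replicas" you mentioned
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that the HPA reacts to the target deployment's own load, not to the cronjob, so on its own it won't proactively make room; it mainly covers the "define a minimum number of replicas and let k8s decide the rest" part, and combines well with the priority or node-autoscaling approaches above.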