I've been learning Kubernetes on GKE lately. I'm testing CronJob, and I'm having trouble with its behavior.
Below is an overview of the node pool and CronJob settings.
node pool: 1 node (remaining allocatable memory is 1Gi)
CronJob: a simple busybox cron for testing
So, first of all, I set the CronJob's memory request to exceed the node's allocatable memory. After applying this, the pod of course can't be scheduled and goes into Pending status.
requests:
  memory: 5Gi ## (unbelievable amount !!)
Then I fixed the request to a sensible value, applied it, and deleted the pending pod:
requests:
  memory: 10Mi ## (looks good !!)
But here's the problem.
Even after the pod is deleted, a new pod inheriting the previous configuration (the 5Gi memory request) is immediately created and goes back to Pending.
Removing the CronJob itself would solve the problem, but is there a way to modify the pod itself, or some other clever way to handle this?
I expected that once a pod is deleted, the next pod would be created with the latest YAML settings applied. It also seems strange that the CronJob immediately tries to create a pod with the same settings the moment a pod is deleted. I'd like to know why this happens and what a good solution would be.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-job
  namespace: default
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            cloud.google.com/gke-nodepool: example-node-pool
          containers:
            - name: hello-world
              image: busybox
              resources:
                requests:
                  memory: 5Gi # fixed to 10Mi
              command:
                [
                  "/bin/sh",
                  "-c",
                  "echo 'Hello, World!'"
                ]
          restartPolicy: Never
There's one intermediate layer in this setup. A CronJob doesn't directly create a Pod; instead, a CronJob creates Job objects periodically, and the Job is responsible for creating the Pod. If you change the CronJob, it doesn't change the existing Job at all.
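You can see this ownership chain directly with kubectl. A minimal sketch, assuming the default namespace; the <pod-name> and <job-name> placeholders are whatever names your cluster actually shows:

kubectl get cronjobs,jobs,pods -n default

# The Pod's ownerReferences point at the Job...
kubectl get pod <pod-name> -n default -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'

# ...and the Job's ownerReferences point at the CronJob.
kubectl get job <job-name> -n default -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'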
So, in your example, the CronJob might have created a Job named example-job-1710681600. That Job's Pod template is immutable. If you delete the corresponding Pod, the Job will recreate it, following the Pod spec embedded in the Job.
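You can watch that recreation happen. A quick sketch (the Job name here is hypothetical; the Job controller labels its Pods with job-name, so you can select on that):

# Delete the Pending Pod; the Job controller immediately creates a replacement
# from the 5Gi spec still embedded in the Job.
kubectl delete pod -n default -l job-name=example-job-1710681600
kubectl get pods -n default --watch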
If you delete the Job, it won't be recreated (its scheduled time has passed), and deleting it removes the corresponding Pod as well. The next time the CronJob is scheduled to run, you'll get a new Job that follows the new Job template embedded in the updated CronJob.
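So the practical fix is to delete the stuck Job rather than the Pod. A sketch, again with the hypothetical Job name:

# Find the Job the CronJob created.
kubectl get jobs -n default

# Deleting the Job also removes its Pending Pod, and nothing recreates either one.
kubectl delete job example-job-1710681600 -n default

# At the next scheduled run ("*/5 * * * *"), the CronJob creates a fresh Job
# from the updated template with the 10Mi request.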