kubernetes | cron | kubernetes-helm | schedule

Kubernetes CronJob (v1.29.4): automatic trigger of the CronJob after a settings change


I'm currently facing an issue with Helm. I've written a values file that contains the configuration for a CronJob, as follows:

...
custom_pipeline:
  schedule: "0 21 * * *" 
  steps:
    - name: test-cj
      volume: True # True or False; when True, a volume is mounted (default path "/app/persistent")
      mountPath: "/app/ccc/"
      image: test_cj
      tag: "0.0.0"
      env:
      - name: DEBUG
        valueFrom:
          configMapKeyRef:
            name: test-cj-config
            key: DEBUG
...

When I change the values of the CronJob (custom_pipeline) and apply them with:

helm upgrade --install test-cj . --values=./helm_values/values_test_cj.yml -n test-cj

the CronJob is triggered immediately, even though I have specified a fixed schedule.

I tried changing the restartPolicy setting, but it didn't work.

The current Helm template used is as follows:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .Release.Name  }}-custom-pipeline
  namespace: {{ .Release.Namespace }}
spec:
  schedule: {{ .Values.custom_pipeline.schedule | default "0 0 * * 1" | quote }}
  concurrencyPolicy: {{ .Values.cron_job_concurrencyPolicy | default "Allow" | quote }}
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      parallelism: 1
      template:
        spec:
          volumes:
          {{- range $index, $step := .Values.custom_pipeline.steps }}
          {{- if or $step.volume (not (eq ($step.volume | toString) "<nil>")) }}
            - name: {{ $.Release.Name }}-{{ $step.name }}-data
              persistentVolumeClaim:
                claimName: {{ $.Release.Name }}-{{ $step.name }}-data-claim
          {{- end }}
          {{- end }}
          {{- if or .Values.imagePullSecrets (eq (.Values.imagePullSecrets | toString) "<nil>") }}
          imagePullSecrets:
          - name: regcred
          {{- end }}
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet
          initContainers:
          {{- range $index, $step := .Values.custom_pipeline.steps }}
            - name: {{ $.Release.Name  }}-{{ $step.name }}
              image: "{{ default "mydockerhub/" $.Values.custom_registry }}{{ $step.image }}:{{ default "latest" $step.tag }}"
              imagePullPolicy: Always

              {{ if or $step.volume (not (eq ($step.volume | toString) "<nil>"))}}
              volumeMounts:
              - name: {{ $.Release.Name  }}-{{ $step.name }}-data
                mountPath: {{ default "/app/persistent" $step.mountPath}}
              {{ end }}

              {{ if $step.env }}
              env:
                {{- toYaml $step.env | nindent 14 }}
              {{ end }}
          {{- end }}
          containers:
            - name: {{ .Release.Name  }}-job-done
              image: busybox
              command: ['sh', '-c', 'echo "custom pipeline completed"']
          restartPolicy: Never

Solution

  • This behavior is expected; see https://github.com/kubernetes/kubernetes/issues/63371.

    The only thing you can do is set the .spec.startingDeadlineSeconds field to something small, like 200 seconds. In that case the CronJob won't be run if the update happens more than 200 seconds after the configured schedule time.
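
    As a minimal sketch against the template shown above, the field sits directly under the CronJob spec, next to schedule. The 200-second value is only an illustration; choose whatever window of "late starts" you are willing to tolerate:

    spec:
      schedule: {{ .Values.custom_pipeline.schedule | default "0 0 * * 1" | quote }}
      # Runs that start more than 200s after their scheduled time are treated
      # as missed and skipped, which (per the issue above) suppresses the
      # unwanted run that fires right after a helm upgrade.
      startingDeadlineSeconds: 200
      concurrencyPolicy: {{ .Values.cron_job_concurrencyPolicy | default "Allow" | quote }}
      ...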