Tags: kubernetes, devops, kubernetes-pod, kubernetes-statefulset

kubectl wait - timed out waiting for the condition before the condition met


I am trying to check the pod status when I scale down the StatefulSet, but the "kubectl wait" command exits before the pods are fully terminated.

StatefulSet scale-down

> kubectl scale statefulset.apps/myneo4j --replicas=0

Kubectl wait

> time kubectl wait --for=condition=delete pod -l app.kubernetes.io/name=neo
timed out waiting for the condition on pods/myneo4j-0
timed out waiting for the condition on pods/myneo4j-1
timed out waiting for the condition on pods/myneo4j-2

real    1m30.163s
user    0m0.122s
sys     0m0.057s

Please suggest how to make the command wait until the pods are fully terminated, without using --timeout; my concern is that a timeout would make the command keep waiting even after the pods are fully terminated.


Solution

  • There is actually a default timeout of 30s. In your case the timeout is reached for each of your 3 replicas, so the total time is 1m30s.

    You can set any positive value (with a unit) to increase the timeout, zero to check once without waiting, or a negative value to wait for up to a week:

    kubectl wait --for=condition=delete pod -l app.kubernetes.io/name=neo --timeout=1h
    

    With a timeout set, the command waits until the specified condition appears in the Status field of the resource, or until the timeout is reached. It will not wait forever; if it appears to, debug your command.

    Notes:

    In practice the timeout can't be 0; the minimum accepted value is 1s.
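
    Putting the two steps together, a minimal sketch (reusing the StatefulSet name and label selector from the question; the timeout values are illustrative):

    ```shell
    #!/usr/bin/env sh
    # Scale the StatefulSet down to zero replicas...
    kubectl scale statefulset.apps/myneo4j --replicas=0

    # ...then block until every matching pod is gone, with a timeout
    # generous enough to outlast graceful termination of all replicas.
    kubectl wait --for=condition=delete pod \
      -l app.kubernetes.io/name=neo \
      --timeout=1h

    # A negative value (e.g. --timeout=-1s) would instead wait for up
    # to a week, if no practical upper bound is wanted.
    ```

    Because kubectl wait exits as soon as the condition is met, the large timeout only bounds the worst case; it does not delay the command once the pods are actually deleted.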