I am a newcomer to Kubernetes, and I would like to know if there is a better attribute to accurately describe the state of a Pod.
Kubernetes version: Kubernetes v1.27.3
Java client Maven dependency:
<dependency>
    <groupId>io.kubernetes</groupId>
    <artifactId>client-java</artifactId>
    <version>18.0.1</version>
</dependency>
I am testing in a Kubernetes cluster with two nodes. I shut down the kubelet service on the worker node to simulate a loss of connectivity from that node, so that the pod on the worker node is automatically rescheduled to the master node.
When I use the following command:
kubectl get po -n my-namespace -o wide
we can see:
nginx1-74499f547c-gbdzf 1/1 Running 0 6h10m 10.244.0.55 kylin-master <none> <none>
nginx1-74499f547c-xndkm 1/1 Terminating 0 8h 10.244.2.24 kylin-worker02 <none> <none>
Obviously, the status of nginx1-74499f547c-xndkm is Terminating.
But when I use the following code to list the pods and iterate over them to find the pod with the same name:
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1Pod;
import io.kubernetes.client.openapi.models.V1PodList;

CoreV1Api coreV1Api = new CoreV1Api();
V1PodList v1PodList;
try {
    // tenxOpenApiConfig and info come from the surrounding application code
    v1PodList = coreV1Api.listNamespacedPod(tenxOpenApiConfig.getTeamSpace(), null, null, null, info, null, null, null, null, null, null);
    for (V1Pod item : v1PodList.getItems()) {
        // .....
    }
} catch (ApiException e) {
    // ......
}
The name of the pod is nginx1-74499f547c-xndkm, but item.status.phase is "Running".
I can't understand why this is happening. Is it a bug? Or is status.phase simply not suitable for representing a pod's status?
Looking forward to your help.
TL;DR
Kubernetes Pods and containers don't have Terminating as a status. To be more concise, kubectl prints a status derived from multiple fields in the Pod object, not just pod.status.phase but also fields like pod.metadata.deletionTimestamp. Let me explain in more detail below.
More details
When a pod is being deleted for any reason (e.g. node failure or manual deletion), the deletion doesn't happen immediately, as that could be disruptive to the application running inside. Kubernetes first sends a TERM (aka SIGTERM) signal to the containers inside the pod to give them some time to terminate gracefully. After some time (defined in pod.spec.terminationGracePeriodSeconds, which defaults to 30 seconds), if the containers still haven't terminated, it kills them.
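If you want to inspect this grace period from the Java client, it is exposed on the pod spec. Here is a minimal sketch, assuming the V1Pod items from the listing code in your question (the helper name gracePeriodSeconds is just for illustration):
import io.kubernetes.client.openapi.models.V1Pod;

// Returns the pod's configured grace period, or null if the field is unset,
// in which case the cluster applies the 30-second default.
Long gracePeriodSeconds(V1Pod pod) {
    return pod.getSpec() == null ? null : pod.getSpec().getTerminationGracePeriodSeconds();
}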
During this period, the containers are actually running, which is why Kubernetes reports the containers and the pod as running.
However, when the pod is being deleted, Kubernetes sets pod.metadata.deletionTimestamp to the time the deletion was issued. kubectl is smart enough to detect this and shows the Pod's status as Terminating when it sees it.
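So in your Java code you can reproduce what kubectl shows by checking that field yourself. A minimal sketch (displayStatus is a hypothetical helper name; kubectl's real logic considers more fields, such as container states, but the deletionTimestamp check is the relevant part for your case):
import io.kubernetes.client.openapi.models.V1Pod;

// Approximates kubectl's STATUS column: a pod whose deletionTimestamp is set
// is shown as Terminating, regardless of what status.phase says.
String displayStatus(V1Pod pod) {
    if (pod.getMetadata() != null && pod.getMetadata().getDeletionTimestamp() != null) {
        return "Terminating";
    }
    return pod.getStatus() != null ? pod.getStatus().getPhase() : "Unknown";
}
With this, nginx1-74499f547c-xndkm would be reported as Terminating even though its status.phase is still Running.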
Documentation
There is nice documentation about this here. Let me highlight a useful paragraph:
Note: When a Pod is being deleted, it is shown as Terminating by some kubectl commands. This Terminating status is not one of the Pod phases. A Pod is granted a term to terminate gracefully, which defaults to 30 seconds. You can use the flag --force to terminate a Pod by force.
Additional advice
You can see the pod mostly* as you see it in the code by using -o yaml. Example:
kubectl get pod -n my-namespace -o yaml
* kubectl hides .metadata.managedFields when viewing any object (including pods), as they likely won't be useful for you. If you want kubectl to show them, you can add --show-managed-fields to the kubectl command. I mention this just for completeness; if you are starting with Kubernetes, I would recommend ignoring them. There is documentation about them here.
Hope this helps :)