Tags: docker, kubernetes, kubectl, crashloopbackoff

CrashLoopBackOff while deploying pod using image from private registry


I am trying to create a pod using my own Docker image from a registry on localhost.

This is the Dockerfile used to build the image:

FROM centos:8

RUN yum install -y gdb

RUN yum group install -y "Development Tools"

CMD ["/usr/bin/bash"]

This is the YAML file used to create the pod:

---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
    - name: server
      imagePullPolicy: Never
      image: localhost:5000/server
      ports:
        - containerPort: 80



root@node1:~/test/server# docker images | grep server
server                                                 latest              82c5228a553d        3 hours ago         948MB
localhost.localdomain:5000/server                      latest              82c5228a553d        3 hours ago         948MB
localhost:5000/server                                  latest              82c5228a553d        3 hours ago         948MB

The image has been pushed to the local registry (localhost:5000).

The following is the error I receive:

root@node1:~/test/server# kubectl get pods
NAME     READY   STATUS             RESTARTS   AGE
server   0/1     CrashLoopBackOff   5          5m18s

The output of kubectl describe pod:

root@node1:~/test/server# kubectl describe pod server
Name:         server
Namespace:    default
Priority:     0
Node:         node1/10.0.2.15
Start Time:   Mon, 07 Dec 2020 15:35:49 +0530
Labels:       app=server
Annotations:  cni.projectcalico.org/podIP: 10.233.90.192/32
              cni.projectcalico.org/podIPs: 10.233.90.192/32
Status:       Running
IP:           10.233.90.192
IPs:
  IP:  10.233.90.192
Containers:
  server:
    Container ID:   docker://c2982e677bf37ff11272f9ea3f68565e0120fb8ccfb1595393794746ee29b821
    Image:          localhost:5000/server
    Image ID:       docker-pullable://localhost.localdomain:5000/server@sha256:6bc8193296d46e1e6fa4cb849fa83cb49e5accc8b0c89a14d95928982ec9d8e9
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 07 Dec 2020 15:41:33 +0530
      Finished:     Mon, 07 Dec 2020 15:41:33 +0530
    Ready:          False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tb7wb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-tb7wb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tb7wb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/server to node1
  Normal   Pulled     4m34s (x5 over 5m59s)  kubelet            Container image "localhost:5000/server" already present on machine
  Normal   Created    4m34s (x5 over 5m59s)  kubelet            Created container server
  Normal   Started    4m34s (x5 over 5m59s)  kubelet            Started container server
  Warning  BackOff    56s (x25 over 5m58s)   kubelet            Back-off restarting failed container

I get no logs:

root@node1:~/test/server# kubectl logs -f server
root@node1:~/test/server# 

I am unable to figure out whether the issue is with the container or with the YAML file used to create the pod. Any help would be appreciated.


Solution

  • Posting this as Community Wiki.

    As pointed out by @David Maze in the comments section:

    If docker run exits immediately, a Kubernetes Pod will always go into CrashLoopBackOff state. Your Dockerfile needs to COPY in or otherwise install an application and set its CMD to run it.
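
    For illustration only, here is a minimal sketch of such a Dockerfile, assuming a long-running server binary; the binary name and path are hypothetical placeholders:

    FROM centos:8

    RUN yum install -y gdb
    RUN yum group install -y "Development Tools"

    # Hypothetical: copy in a long-running server binary built elsewhere
    COPY server /usr/local/bin/server

    # CMD must start a foreground process that does not exit on its own
    CMD ["/usr/local/bin/server"]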

    The root cause can also be determined from the Exit Code. In the 3) Check the exit code article, you can find a few exit codes like 0, 1, 128 and 137 with descriptions.

    3.1) Exit Code 0

    This exit code implies that the specified container command completed ‘successfully’, but too often for Kubernetes to accept as working.

    In short, your container was created, all the actions mentioned were executed, and as there was nothing else to do, it exited with Exit Code 0.
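
    You can reproduce this outside Kubernetes. Assuming the image tag from the question, running it without -i/-t gives bash a closed stdin, so it reads EOF and exits immediately:

    $ docker run --rm localhost:5000/server
    $ echo $?
    0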

    A CrashLoopBackOff error occurs when a pod repeatedly fails to start in Kubernetes.

    Your image, based on centos with a few additional packages installed, did not leave any process running in the background, so the container was categorized as Completed. As this happened so quickly, Kubernetes restarted it, and it fell into a loop.

    $ kubectl run centos --image=centos
    $ kubectl get po -w
    NAME     READY   STATUS             RESTARTS   AGE
    centos   0/1     CrashLoopBackOff   1          5s
    centos   0/1     Completed          2          17s
    centos   0/1     CrashLoopBackOff   2          31s
    centos   0/1     Completed          3          46s
    centos   0/1     CrashLoopBackOff   3          58s
    centos   1/1     Running            4          88s
    centos   0/1     Completed          4          89s
    centos   0/1     CrashLoopBackOff   4          102s
    
    $ kubectl describe po centos | grep 'Exit Code'
          Exit Code:    0
    

    But if you used sleep 3600 in your container, the sleep command would keep it running for an hour. After that time it would also exit with Exit Code 0.
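
    For example, overriding the command keeps the same base image in Running state for the duration of the sleep (the pod name centos-sleep is arbitrary):

    $ kubectl run centos-sleep --image=centos -- sleep 3600
    $ kubectl get po centos-sleep   # should show STATUS Running for the next hour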

    Hope this clarifies things.