K8s Café: Kubernetes POD CrashLoopBackOff status

Arun Kumar Singh
3 min read · Sep 30, 2019


First of all, allow me to introduce a new series on Kubernetes: The K8s Café.
In this series we will talk about container technology and K8s. This is the first post in the series and will be followed by a few more on different topics. So sit back and enjoy learning K8s. This post is about the CrashLoopBackOff status in Kubernetes pods.

A Pod is the smallest deployable unit in Kubernetes: one or more containers that are scheduled together on the same host and share its network and storage. So where you deploy a container in the plain Docker world, you deploy a Pod in the Kubernetes world. A Pod is a Kubernetes object, and when you run a Pod, Kubernetes runs its container(s) in the background. You can list the status of your Pods with the kubectl command-line utility.
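For reference, here is a minimal sketch of creating a single-container Pod from the command line; the manifest and the name nginx-demo are purely illustrative, not taken from the cluster shown below:

cat <<'EOF' | kubectl apply -n prod -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    env: dev
spec:
  containers:
  - name: nginx
    image: nginx
EOF

kubectl get pods -n prod          # the new Pod shows up in the listing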

master $ kubectl get pods -n prod
NAME               READY   STATUS             RESTARTS   AGE
busybox-aa-645nn   0/1     CrashLoopBackOff   1          12s
busybox-xx-qwgm7   0/1     CrashLoopBackOff   1          12s
nginx-sxx-wecma7   0/1     Running            1          12s

Running a containerized application in Kubernetes is not always a happy scenario. Pods sometimes fail with an Error or CrashLoopBackOff status. Once they fail, you have to use your debugging skills to figure out what went wrong. As a starting step you need to investigate the Pod and the container(s) inside it. The most suitable command for this is kubectl describe.

master $ kubectl describe pod busybox-aa-645nn -n prod
Name:               busybox-aa-645nn
Namespace:          prod
Priority:           0
PriorityClassName:  <none>
Node:               node01/172.17.0.8
Start Time:         Tue, 01 Oct 2019 20:37:06 +0000
Labels:             env=dev
Annotations:        <none>
Status:             Running
IP:                 10.40.0.2
Containers:
  busybox:
    Container ID:   docker://c35a40ccb50d3393dfafc528f5846f34d4e2a2aedc466f5e40c988f4f4c
    Image:          busybox
    Image ID:       docker-pullable://busybox@sha256:fe301db49df08c384001ed752df7f608f21528048e8a08b51e
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated

In the command output you will find CrashLoopBackOff as the reason for the container's Waiting state. This is the point we wanted to discuss: what is this status, and what do you do when you are stuck here?

A CrashLoopBackOff means that your Pod has tried to come up and then crashed, and Kubernetes keeps restarting it according to the Pod's restartPolicy, waiting a little longer before each attempt (an exponential back-off). You can set the restartPolicy on the Pod object yourself or leave it at the default (Always).
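If you want to see this back-off behaviour for yourself, a Pod whose container exits immediately will reproduce it; the name crash-demo below is made up for this sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: crash-demo
spec:
  restartPolicy: Always                              # the default; kubelet keeps restarting the container
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo boom; exit 1"]       # exits at once, forcing a crash loop
EOF

kubectl get pod crash-demo -w                        # watch RESTARTS grow while the back-off delay increases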

Most of the time you will be able to debug the issue from the kubectl describe output; if not, you can also check the logs of the previous, crashed container instance.

master $ kubectl logs busybox-aa-645nn -n prod --previous
(view the logs of the previously crashed container instance)

This way, I assume, you will be able to identify the root cause. Now, what next?

If the issue is with the container image itself, you need to fix the image and redeploy the container/Pod. If not, the issue might be with the environment or some other component, and I expect you will be able to fix that as well.
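For example, if a Deployment is pointing at a broken image tag, you can roll out a corrected one; the Deployment name and image tag below are hypothetical:

kubectl set image deployment/busybox-aa busybox=busybox:1.31 -n prod   # swap in a working image
# or fix the manifest and re-apply it
kubectl apply -f busybox-deployment.yaml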

Now suppose you want to run the failed container again. What you do depends on how the Pod was created. There is no such thing as restarting a Pod or container in Kubernetes; both are ephemeral, so you first need to identify what created the Pod.

Most of the time a Pod is created by a ReplicaSet or Deployment object. In that case you can simply delete the Pod and Kubernetes will start a new one for you. (One of the best features of K8s.)
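A quick sketch of that flow, reusing the Pod name from the earlier output:

kubectl describe pod busybox-aa-645nn -n prod | grep "Controlled By"   # confirm a ReplicaSet/Deployment owns it
kubectl delete pod busybox-aa-645nn -n prod                            # delete the crashed Pod
kubectl get pods -n prod -w                                            # the controller starts a replacement with a new name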

If not, you have to create the Pod again manually. If you have a geeky mind you could play with some docker commands to revive the container, but I don't think that is the right idea. So that's all I wanted to talk about. Happy reading.
