kubernetes, google-kubernetes-engine, amazon-eks, kubernetes-deployment

Deployment Controller not selecting pods with same label


I have created a pod using the YAML below:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: dc1
spec:
  containers:
    - name: cont1
      image: nginx

Now, I am creating a Deployment with the selector app=dc1 using the command below:

kubectl create deploy dc1 --image=nginx

Note: when we create a Deployment with the name "dc1" this way, kubectl automatically sets the selector app=dc1 on it.
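You can verify this by inspecting the generated object with kubectl get deploy dc1 -o yaml; trimmed to the relevant fields, it should look roughly like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dc1
  labels:
    app: dc1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dc1
  template:
    metadata:
      labels:
        app: dc1
    spec:
      containers:
        - name: nginx
          image: nginx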



I notice that the deployment controller creates a new pod instead of selecting the already existing pod.

NAME                READY   STATUS    RESTARTS   AGE   LABELS
dc1-969ff47-ljbxk   1/1     Running   0          32m   app=dc1,pod-template-hash=969ff47
pod1                1/1     Running   0          33m   app=dc1

Question:
Why is dc1 not selecting the existing pod1, which has the same label app=dc1?




Solution

  • If you check the ReplicaSet created by the Deployment, you will notice a new label named pod-template-hash:

    kubectl get replicasets dc1-xxxxx -o yaml
    

    It is generated by hashing the PodTemplate of the ReplicaSet, and it is added to the ReplicaSet's labels, to its selector, and to the Pods it creates:

    labels:
        app: dc1
        pod-template-hash: xxxxxxxx
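
    Because that hash is also part of the ReplicaSet's selector, the ReplicaSet only manages pods that carry both labels. The relevant part of the ReplicaSet would look roughly like this (xxxxxxxx being a placeholder for the real hash):

    selector:
      matchLabels:
        app: dc1
        pod-template-hash: xxxxxxxx

    Since pod1 only carries app=dc1, it does not match this selector, so the Deployment's ReplicaSet creates its own pod instead of adopting pod1.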
    

    If you define both labels on your pod, the pod will then be managed by the Deployment (through its ReplicaSet), for example:
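
    A sketch of such a pod, with xxxxxxxx standing in for the real hash value from the ReplicaSet:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      labels:
        app: dc1
        pod-template-hash: xxxxxxxx
    spec:
      containers:
        - name: cont1
          image: nginx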

    However, this is not recommended by the Kubernetes documentation:

    Note: You should not create other Pods whose labels match this selector, either directly, by creating another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.