Tags: kubernetes, kubernetes-pod, configmap

Restart pods when configmap updates in Kubernetes?


How do I automatically restart Kubernetes pods, and the pods managed by Deployments, when their ConfigMap is changed or updated?


I know there's been talk about the ability to automatically restart pods when a ConfigMap changes, but to my knowledge this is not yet available in Kubernetes 1.2.

So what (I think) I'd like to do is a "rolling restart" of the Deployment resource associated with the pods consuming the ConfigMap. Is it possible, and if so how, to force a rolling restart of a Deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it, or is there a better option?


Solution

  • Signalling a pod on ConfigMap update is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).

    You can always write a custom PID 1 that notices the ConfigMap has changed and restarts your app.

    You can also, for example, mount the same ConfigMap in two containers, expose an HTTP health check in the second container that fails if the hash of the ConfigMap contents changes, and use that as the liveness probe of the first container (this works because containers in a pod share the same network namespace, so a port served by the sidecar is reachable via the pod's IP). The kubelet will then restart the first container for you when the probe fails.

    Of course, if you don't care about which nodes the pods are on, you can simply delete them and the replication controller (or the Deployment's ReplicaSet) will "restart" them for you.
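On the rolling-restart part of the question: one common workaround (not an official feature as of 1.2) is to patch a throwaway annotation into the Deployment's pod template; since the template changed, Kubernetes performs a rolling update. A sketch, where the deployment name `my-deployment` and the annotation key `restarted-at` are hypothetical placeholders:

```shell
# Build a JSON patch that bumps a dummy pod-template annotation to the
# current timestamp (deployment name and annotation key are hypothetical).
PATCH=$(printf '{"spec":{"template":{"metadata":{"annotations":{"restarted-at":"%s"}}}}}' "$(date +%s)")
echo "$PATCH"
# Applying it triggers a rolling update of the pods (requires a cluster):
# kubectl patch deployment my-deployment -p "$PATCH"
```

Strictly speaking this does change the template, but only in a cosmetic annotation, so it is about as close to "restart without changing anything" as the API allows here.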
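The custom PID 1 idea above could be sketched roughly as follows: run the real app as a child process, poll the mounted ConfigMap volume for changes, and restart the child when it differs. The command `/app/server`, the mount path `/etc/config`, and the polling interval are all hypothetical choices:

```python
# Minimal custom-PID-1 sketch: restart the child app when the mounted
# ConfigMap changes. /app/server and /etc/config are placeholders.
import os
import subprocess
import time

def snapshot(path):
    """Map of file path -> mtime for every file under path."""
    snap = {}
    for root, _, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            snap[full] = os.path.getmtime(full)
    return snap

def run(cmd, config_dir, poll_seconds=2.0):
    seen = snapshot(config_dir)
    child = subprocess.Popen(cmd)
    while True:
        time.sleep(poll_seconds)
        current = snapshot(config_dir)
        if current != seen:  # the ConfigMap volume was updated
            child.terminate()
            child.wait()
            child = subprocess.Popen(cmd)
            seen = current

if __name__ == "__main__":
    run(["/app/server"], "/etc/config")
```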
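The liveness-probe trick could look something like the sketch below for the sidecar side: hash the mounted ConfigMap at startup, then serve an HTTP endpoint that returns 500 once the hash changes. The mount point `/etc/config` and port 8080 are assumptions, not fixed by the technique:

```python
# Sidecar health-check sketch: 200 while the mounted config is unchanged,
# 500 once it differs. Mount path and port are hypothetical.
import hashlib
import http.server
import os

def config_hash(path):
    """sha256 over the names and contents of every file under path."""
    digest = hashlib.sha256()
    for root, _, files in sorted(os.walk(path)):
        for name in sorted(files):
            digest.update(name.encode())
            with open(os.path.join(root, name), "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()

def make_handler(path, initial):
    """Handler returning 200 while config_hash(path) == initial, else 500."""
    class Health(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            ok = config_hash(path) == initial
            self.send_response(200 if ok else 500)
            self.end_headers()
        def log_message(self, *args):  # keep probe traffic out of the logs
            pass
    return Health

if __name__ == "__main__":
    mount = "/etc/config"  # hypothetical ConfigMap mount point
    handler = make_handler(mount, config_hash(mount))
    http.server.HTTPServer(("", 8080), handler).serve_forever()
```

The first container would then declare an `httpGet` liveness probe against port 8080; when the probe starts failing, the kubelet restarts that container as described above.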