kubernetes

My kubernetes cluster IP address changed and now kubectl will no longer connect


So how do I regenerate the admin.conf now that I have a new IP address? Running kubeadm init again would just wipe everything, which is not what I want.


Solution

  • You do not want to use kubeadm reset. That will wipe the cluster and you would have to configure it again from scratch.

    In your scenario, have a look at the steps below:

    1. nano /etc/hosts (update the YOUR_HOSTNAME entry with the new IP address)

    2. nano /etc/kubernetes/config (configuration settings related to your cluster). In this file, look for the following parameters and update them accordingly:

      KUBE_MASTER="--master=http://YOUR_HOSTNAME:8080"

      KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME:2379" #2379 is default port

    3. nano /etc/etcd/etcd.conf (etcd configuration)

      KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME/WHERE_EVER_ETCD_HOSTED:2379"

      2379 is the default port for etcd, and you can define multiple etcd servers here, comma-separated.

    4. Restart the etcd, kube-apiserver, and kubelet services.

    It is good practice to use a hostname instead of an IP address in these files, so that a future address change does not break the cluster again.
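The steps above can be sketched as a short shell session. The hostname (`master-node`) and the old/new IP addresses below are placeholders for illustration, and the script edits throwaway copies under a temp directory rather than the real `/etc/hosts`, `/etc/kubernetes/config`, and `/etc/etcd/etcd.conf`, so it is safe to run as a dry run before touching the master node:

```shell
#!/bin/sh
# Dry-run sketch of the IP update: operate on copies in a temp dir.
set -eu
WORKDIR="$(mktemp -d)"

# Step 1: point YOUR_HOSTNAME (here: master-node) at the new address.
printf '10.0.0.5 master-node\n' > "$WORKDIR/hosts"
sed -i 's/^10\.0\.0\.5 /10.0.0.99 /' "$WORKDIR/hosts"

# Steps 2-3: the same substitution covers KUBE_MASTER and
# KUBE_ETCD_SERVERS lines if they contain the old IP instead of a hostname.
printf 'KUBE_MASTER="--master=http://10.0.0.5:8080"\n' > "$WORKDIR/config"
sed -i 's/10\.0\.0\.5/10.0.0.99/g' "$WORKDIR/config"

# Show the rewritten files.
cat "$WORKDIR/hosts" "$WORKDIR/config"

# Step 4 (NOT run here): on the real host, restart the services so they
# pick up the change, e.g.:
#   systemctl restart etcd kube-apiserver kubelet
```

Once the real files are updated and the services restarted, `kubectl get nodes` is a quick way to confirm connectivity is back.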