I am trying to deploy a K8s cluster from scratch using Kelsey Hightower's Kubernetes the Hard Way guide; in my case I am using Vagrant and VirtualBox.
Each of my masters and workers has a DHCP-assigned interface on eth0 (10.0.2.x range) for pulling bits from the internet, and a statically addressed interface on the 10.10.10.x/24 range (eth2 on worker-1 in the output further down) for internal k8s communication.
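For context, the layout comes from a Vagrantfile along these lines (a minimal sketch only; the box name and exact IPs are illustrative, not my actual file):

# Sketch only - eth0 is Vagrant's implicit NAT interface (10.0.2.x)
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.define "worker-1" do |node|
    node.vm.hostname = "worker-1"
    # adds a host-only NIC on the static 10.10.10.0/24 range
    node.vm.network "private_network", ip: "10.10.10.21"
  end
end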
[vagrant@master-1 ~]$ kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
worker-1   Ready    <none>   32s   v1.12.0   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   containerd://1.2.0-rc.0
worker-2   Ready    <none>   2s    v1.12.0   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-957.1.3.el7.x86_64   containerd://1.2.0-rc.0
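Both workers report the same NAT address. To inspect exactly which addresses a node object carries, a standard jsonpath query helps:

kubectl get node worker-1 -o jsonpath='{.status.addresses}'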
I initially did not have the flags --node-ip="10.10.10.x" and --address="10.10.10.x" set. After adding them, I deleted the nodes and restarted the kubelet service, hoping the nodes would re-register with the new addresses, but the INTERNAL-IP never updates.
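Roughly, the remove/re-register sequence I used (worker-1 shown as an example):

# on the master: drop the stale node object
kubectl delete node worker-1
# on the worker: restart kubelet so it registers again
sudo systemctl daemon-reload
sudo systemctl restart kubelet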
Following is a sample of the kubelet config:
/var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
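(The ${POD_CIDR} and ${HOSTNAME} placeholders are expanded by the shell when the file is generated with a heredoc, Kubernetes the Hard Way style; the subnet below is illustrative:)

POD_CIDR="10.200.1.0/24"    # illustrative per-node pod subnet
HOSTNAME=$(hostname -s)
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
# ... body as above; the shell substitutes ${POD_CIDR} and ${HOSTNAME} here ...
EOF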
/etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--node-ip="$NODE_IP" \\
--address="$NODE_IP" \\
--register-node=true \\
--v=2
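One caveat: systemd does not do shell expansion, so $NODE_IP in ExecStart only resolves if the unit defines it (Environment=/EnvironmentFile=) or if the literal IP was substituted when the unit file was generated. A sketch for deriving and supplying it, using eth2 as in the ip a output further down:

# derive the static address from the internal interface
NODE_IP=$(ip -4 addr show eth2 | awk '/inet /{sub("/.*","",$2); print $2}')
echo "${NODE_IP}"    # e.g. 10.10.10.21
# then make it visible to the unit, e.g. under [Service]:
#   Environment=NODE_IP=10.10.10.21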
And here is the kube-apiserver service unit:
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.10.10.11:2379,https://10.10.10.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
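A quick way to sanity-check that the API server answers on the internal network (assuming the default secure port 6443 and the CA file above):

curl --cacert /var/lib/kubernetes/ca.pem https://10.10.10.11:6443/healthz
# expected response: ok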
Also, in Vagrant I believe eth0 is the NAT device, since I see the same 10.0.2.15 IP assigned to all VMs (masters and workers):
[vagrant@worker-1 ~]$ ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:75:dc:3d brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 84633sec preferred_lft 84633sec
    inet6 fe80::5054:ff:fe75:dc3d/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:24:a4:c2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.206/24 brd 192.168.0.255 scope global noprefixroute dynamic eth1
       valid_lft 3600sec preferred_lft 3600sec
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:76:22:4a brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.21/24 brd 10.10.10.255 scope global noprefixroute eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe76:224a/64 scope link
       valid_lft forever preferred_lft forever
[vagrant@worker-1 ~]$
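Note that eth0 also owns the default route (via 10.0.2.2, the VirtualBox NAT gateway), and as far as I understand that default-route interface is what kubelet falls back to when --node-ip is not set. Checking:

ip route | grep default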
I guess the ask is: how do I get the INTERNAL-IP (and EXTERNAL-IP) to update after making these changes to the kubelet configuration?
I edited /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, adding the --node-ip flag to KUBELET_CONFIG_ARGS.
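After the edit, the relevant line looked roughly like this (the IP is illustrative; the variable name is the stock kubeadm drop-in's):

Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --node-ip=10.10.10.21"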
Then I restarted kubelet with:
systemctl daemon-reload
systemctl restart kubelet
And kubectl get nodes -o wide reported the new IP addresses immediately. It took a bit longer when I did it on the master, but it happened eventually.
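For verification, kubectl get nodes -o wide should then list the static addresses, along these lines (illustrative, truncated columns):

kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
worker-1   Ready    <none>   5m    v1.12.0   10.10.10.21   <none>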