I changed the API server IP address to an internal network address on all master nodes, and updated it for kube-controller-manager and the kubelet as well. After restarting the kubelet, kube-controller-manager picked up the new IP address and works properly, but kube-scheduler did not pick up the new address.
What I did:
I changed the IP address in the server field of /etc/kubernetes/scheduler.conf on each master node; the previous value was https://127.0.0.1:6443 and the new value is https://<internal_ip_of_this_master_node>:6443
I recreated the kube-scheduler pods on the control plane
I restarted the kubelet on each master node
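For reference, the per-node changes were roughly equivalent to the following sketch (<INTERNAL_IP> stands for that node's internal address and <node-name> for the node name; the exact commands I used may have differed):

# update the server: line in the scheduler kubeconfig
sed -i 's#https://127.0.0.1:6443#https://<INTERNAL_IP>:6443#' /etc/kubernetes/scheduler.conf

# recreate the static kube-scheduler pod (the kubelet starts it again from the manifest)
kubectl -n kube-system delete pod kube-scheduler-<node-name>

# restart the kubelet
systemctl restart kubelet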
In the kube-scheduler logs on the master nodes I see:
master1
E0527 17:05:30.129658 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-scheduler: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 17:05:33.388533 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-scheduler: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 17:05:35.812466 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-scheduler: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 17:05:39.611595 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-scheduler: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 17:05:43.051832 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-scheduler: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 17:05:46.371439 1 leaderelection.go:332] error retrieving resource lock kube-system/kube-scheduler: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": dial tcp 127.0.0.1:6443: connect: connection refused
I0527 17:06:07.483982 1 leaderelection.go:260] successfully acquired lease kube-system/kube-scheduler
master2
E0527 20:21:40.340713 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://127.0.0.1:6443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W0527 20:21:40.662843 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 20:21:40.662954 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W0527 20:21:40.785335 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 20:21:40.785414 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W0527 20:21:41.866142 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 20:21:41.866225 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
master3
W0527 20:22:08.468571 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 20:22:08.468677 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://127.0.0.1:6443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W0527 20:22:10.338980 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 20:22:10.339106 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://127.0.0.1:6443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W0527 20:22:15.962091 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 20:22:15.962156 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1:6443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W0527 20:22:17.945875 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
E0527 20:22:17.945964 1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://127.0.0.1:6443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
W0527 20:22:21.984112 1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://127.0.0.1:6443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
My kube-scheduler pod manifest in /etc/kubernetes/manifests/kube-scheduler.yaml:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=0.0.0.0
    - --config=/etc/kubernetes/kubescheduler-config.yaml
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --profiling=False
    image: registry.k8s.io/kube-scheduler:v1.29.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 30
      httpGet:
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/kubescheduler-config.yaml
      name: kubescheduler-config
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/kubescheduler-config.yaml
      type: ""
    name: kubescheduler-config
status: {}
The IP address was changed in /etc/kubernetes/scheduler.conf.
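After the change, the relevant part of /etc/kubernetes/scheduler.conf looks roughly like this (certificate data trimmed; <internal_ip_of_this_master_node> is a placeholder):

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <...>
    server: https://<internal_ip_of_this_master_node>:6443
  name: kubernetes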
Cluster info:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
m1 Ready control-plane 5d5h v1.29.3 <IP> <none> Ubuntu 22.04.4 LTS 5.15.0-25-generic containerd://1.7.15
m2 Ready control-plane 5d5h v1.29.3 <IP> <none> Ubuntu 22.04.4 LTS 5.15.0-25-generic containerd://1.7.15
m3 Ready control-plane 5d5h v1.29.3 <IP> <none> Ubuntu 22.04.4 LTS 5.15.0-25-generic containerd://1.7.15
w1 Ready worker 5d5h v1.29.3 <IP> <none> Ubuntu 22.04.4 LTS 5.15.0-107-generic containerd://1.7.15
w2 Ready worker 5d5h v1.29.3 <IP> <none> Ubuntu 22.04.4 LTS 5.15.0-107-generic containerd://1.7.15
The kube-scheduler does not support dynamic configuration changes. To point kube-scheduler at a new server address, you need to manually update its configuration file on each master node and restart the scheduler. You can try updating the KubeSchedulerConfiguration:
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /path/to/new/kubeconfig
Then restart the scheduler. If it runs as a systemd service:
systemctl restart kube-scheduler
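In a kubeadm setup the scheduler is a static pod (as in the manifest above), so there is no kube-scheduler systemd unit; one way to force a full restart is to move the manifest out of the static pod directory and back, roughly like this (sketch, run on each master node):

# the kubelet stops the pod when the manifest disappears from the static pod directory ...
mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/kube-scheduler.yaml
sleep 20
# ... and starts a fresh pod once the manifest is put back
mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/kube-scheduler.yaml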