Tags: kubernetes, vsphere, kubeadm

Kubernetes kubeadm init fails due to dial tcp 127.0.0.1:10248: connect: connection refused


I'm trying to set up a very simple 2-node k8s 1.13.3 cluster in a vSphere private cloud. The VMs are running Ubuntu 18.04. Firewalls are turned off for testing purposes, yet the initialization is failing due to a refused connection. Is there something else that could be causing this other than ports being blocked? I'm new to k8s and am trying to wrap my head around all of this.

I've placed a vsphere.conf in /etc/kubernetes/ as shown in this gist: https://gist.github.com/spstratis/0395073ac3ba6dc24349582b43894a77
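Roughly, the file follows the in-tree vSphere cloud provider's INI layout; the sketch below uses placeholder names, addresses, and credentials rather than the exact gist contents:

    # write a vsphere.conf skeleton -- every value here is a placeholder
    cat <<EOF | sudo tee /etc/kubernetes/vsphere.conf
    [Global]
    user = "administrator@vsphere.local"
    password = "CHANGE_ME"
    port = "443"
    insecure-flag = "1"
    datacenters = "Datacenter-1"

    [VirtualCenter "10.0.0.10"]

    [Workspace]
    server = "10.0.0.10"
    datacenter = "Datacenter-1"
    default-datastore = "datastore1"
    resourcepool-path = "Cluster-1/Resources"
    folder = "kubernetes"

    [Disk]
    scsicontrollertype = pvscsi
    EOF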

I've also created a config file to point to when I run kubeadm init. Here's an example of its content: https://gist.github.com/spstratis/086f08a1a4033138a0c42f80aef5ab40
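Again only as a rough sketch (the real gist may differ; the version, endpoint, and paths below are assumptions), a kubeadm 1.13 config that wires in the vSphere cloud provider generally looks something like this:

    # sketch of a kubeadm v1beta1 config -- adjust kubernetesVersion, paths, and endpoints to your setup
    cat <<EOF | sudo tee /etc/kubernetes/kubeadminitmaster.yaml
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: "vsphere"
        cloud-config: "/etc/kubernetes/vsphere.conf"
    ---
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.13.3
    apiServer:
      extraArgs:
        cloud-provider: "vsphere"
        cloud-config: "/etc/kubernetes/vsphere.conf"
      extraVolumes:
      - name: cloud-config
        hostPath: "/etc/kubernetes/vsphere.conf"
        mountPath: "/etc/kubernetes/vsphere.conf"
    controllerManager:
      extraArgs:
        cloud-provider: "vsphere"
        cloud-config: "/etc/kubernetes/vsphere.conf"
      extraVolumes:
      - name: cloud-config
        hostPath: "/etc/kubernetes/vsphere.conf"
        mountPath: "/etc/kubernetes/vsphere.conf"
    EOF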

When I run sudo kubeadm init --config /etc/kubernetes/kubeadminitmaster.yaml, it times out with the following error.

[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Checking sudo systemctl status kubelet shows me that the kubelet is running. I have the firewall on my master VM turned off for now for testing purposes so that I can verify the cluster will bootstrap itself.

   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2019-02-16 18:09:58 UTC; 24s ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 16471 (kubelet)
    Tasks: 18 (limit: 4704)
   CGroup: /system.slice/kubelet.service
           └─16471 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cloud-config=/etc/kubernetes/vsphere.conf --cloud-provider=vsphere --cgroup-driver=systemd --network-plugin=cni --pod-i
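For reference, the kubelet errors quoted further down come out of the kubelet's systemd journal; assuming a systemd host like this one, something along these lines pulls them up:

    # show the most recent kubelet log entries
    sudo journalctl -u kubelet --no-pager | tail -n 50

    # or follow the log live while kubeadm init runs
    sudo journalctl -u kubelet -f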

Here are some additional logs showing that the connection to https://192.168.0.12:6443/ is refused. All of this seems to be causing the kubelet to fail and preventing the init process from finishing.

    Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.633721   16471 kubelet.go:2266] node "k8s-master-1" not found
    Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.668213   16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master-1&limit=500&resourceVersion=0: dial tcp 192.168.0.1
    Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.669283   16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.0.12:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.0.12:6443: connect: connection refused
    Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.670479   16471 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.12:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-1&limit=500&resourceVersion=0: dial tcp 192.1
    Feb 16 18:10:22 k8s-master-1 kubelet[16471]: E0216 18:10:22.734005   16471 kubelet.go:2266] node "k8s-master-1" not found
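Those refused connections to https://192.168.0.12:6443 line up with the kube-apiserver static pod never coming up during init; a quick way to confirm that on this setup (assuming the Docker runtime shown above) would be something like:

    # did kubeadm write the control-plane static pod manifests?
    ls /etc/kubernetes/manifests

    # is the apiserver container running, or not created / crash-looping?
    sudo docker ps -a | grep kube-apiserver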

Solution

  • To address the error (dial tcp 127.0.0.1:10248: connect: connection refused), run the commands below. The kubelet here is started with --cgroup-driver=systemd while Docker defaults to the cgroupfs driver; switching Docker to the systemd cgroup driver removes the mismatch that keeps the kubelet from coming up healthy:

     sudo mkdir -p /etc/docker
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl enable docker
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    sudo kubeadm reset
    sudo kubeadm init
    

    Use the same commands if the same error occurs while configuring a worker node.
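After restarting Docker, it is worth confirming that the cgroup drivers now match and, once init succeeds, that the health endpoint from the original error answers; for example:

    # should report "Cgroup Driver: systemd", matching the kubelet's --cgroup-driver=systemd
    sudo docker info | grep -i "cgroup driver"

    # once the kubelet is healthy this returns "ok" instead of "connection refused"
    curl -sSL http://localhost:10248/healthz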