Tags: kubernetes, coreos, kubectl, rkt

Kubernetes node doesn't get registered


I'm trying to run Kubernetes 1.5.2 on Container Linux by CoreOS alpha (1284.2.0) using rkt.

I have two CoreOS servers: one (controller+worker) with hostname coreos-2.tux-in.com; the second will be a worker with hostname coreos-3.tux-in.com.

For now I'm installing the controller+worker on coreos-2.tux-in.com.

In general I followed the instructions at https://coreos.com/kubernetes/docs/latest/ and added some modifications.

Instead of using the deprecated --api-servers parameter, I use a kubeconfig file.

The problem I'm having is that the kube-proxy pod fails with the following error messages:

Jan 14 23:27:34 coreos-2.tux-in.com rkt[11555]: [  220.477192] kube-proxy[5]: E0114 23:27:34.900184       5 server.go:421] Can't get Node "coreos-2.tux-in.com", assuming iptables proxy, err: nodes "coreos-2.tux-in.com" not found
Jan 14 23:27:34 coreos-2.tux-in.com rkt[11555]: [  220.479181] kube-proxy[5]: I0114 23:27:34.902440       5 server.go:215] Using iptables Proxier.
Jan 14 23:27:34 coreos-2.tux-in.com rkt[11555]: [  220.480503] kube-proxy[5]: W0114 23:27:34.903771       5 server.go:468] Failed to retrieve node info: nodes "coreos-2.tux-in.com" not found
Jan 14 23:27:34 coreos-2.tux-in.com rkt[11555]: [  220.481175] kube-proxy[5]: F0114 23:27:34.903829       5 server.go:222] Unable to create proxier: can't set sysctl net/ipv4/conf/all/route_localnet: open /proc/sys/net/ipv4/conf/all/route_localnet: read-only file system
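For context, the first message means kube-proxy asked the API server for its own Node object and got nothing back, and the fatal last message means it then failed to set the route_localnet sysctl. A quick way to check both from the host (a sketch, assuming the insecure API server port 8080 from the manifest below):

# is the node object registered at all? (insecure local apiserver port)
kubectl -s http://127.0.0.1:8080 get nodes

# can the sysctl kube-proxy tried to set be read on the host?
sysctl net.ipv4.conf.all.route_localnet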

The kubeconfig is located at /etc/kubernetes/controller-kubeconfig.yaml with the following content:

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: tuxin-coreos-cluster
contexts:
- context:
    cluster: tuxin-coreos-cluster
  name: tuxin-coreos-context
preferences:
  colors: true
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/apiserver.pem
    client-key: /etc/kubernetes/ssl/apiserver-key.pem
current-context: tuxin-coreos-context
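To sanity-check this kubeconfig on the controller itself, something like the following should work (a sketch, assuming kubectl is installed on the host):

kubectl --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml config view
kubectl --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml cluster-info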

This is the manifest for kube-apiserver:

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://127.0.0.1:4001
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --advertise-address=10.79.218.2
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true
    - --anonymous-auth=false
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host
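Before digging into kube-proxy it's worth confirming the API server itself answers on the insecure port this manifest exposes, e.g.:

curl http://127.0.0.1:8080/healthz
curl http://127.0.0.1:8080/version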

And this is the manifest for kube-proxy:

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: quay.io/coreos/hyperkube:v1.5.2_coreos.0
    command:
    - /hyperkube
    - proxy
    - --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml
    securityContext:
      privileged: true
    volumeMounts:
    - mountPath: /etc/kubernetes/controller-kubeconfig.yaml
      name: "kubeconfig"
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: "etc-kube-ssl"
      readOnly: true
    - mountPath: /var/run/dbus
      name: dbus
      readOnly: false
  volumes:
  - name: "ssl-certs"
    hostPath:
      path: "/usr/share/ca-certificates"
  - name: "kubeconfig"
    hostPath:
      path: "/etc/kubernetes/controller-kubeconfig.yaml"
  - name: "etc-kube-ssl"
    hostPath:
      path: "/etc/kubernetes/ssl"
  - hostPath:
      path: /var/run/dbus
    name: dbus

/etc/kubernetes/manifests also includes canal, kube-controller-manager, kube-scheduler and kubernetes-dashboard.

I have kubectl on my desktop configured with the following at ~/.kube/config:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/ca.pem
    server: https://coreos-2.tux-in.com
  name: tuxin-coreos-cluster
contexts:
- context:
    cluster: tuxin-coreos-cluster
    user: default-admin
  name: tuxin-coreos-context
current-context: tuxin-coreos-context
kind: Config
preferences: {}
users:
- name: default-admin
  user:
    username: kubelet
    client-certificate: /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/client.pem
    client-key: /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/client-key.pem
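As a sanity check of those client certificates without involving kubectl, a direct curl against the secure port should return version info (paths taken from the kubeconfig above):

curl --cacert /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/ca.pem \
  --cert /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/client.pem \
  --key /Users/ufk/Projects/tuxin-coreos/kubernetes/certs/client-key.pem \
  https://coreos-2.tux-in.com/version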

When I execute kubectl get nodes I get No resources found, so somehow the current node is not registered.

This is my kubelet.service file:

[Service]
Environment=KUBELET_IMAGE_TAG=v1.5.2_coreos.0
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
  --volume var-log,kind=host,source=/var/log \
  --mount volume=var-log,target=/var/log \
  --volume dns,kind=host,source=/etc/resolv.conf \
  --mount volume=dns,target=/etc/resolv.conf \
  --volume cni-bin,kind=host,source=/opt/cni/bin \
  --mount volume=cni-bin,target=/opt/cni/bin \
  --volume rkt,kind=host,source=/opt/bin/host-rkt \
  --mount volume=rkt,target=/usr/bin/rkt \
  --volume var-lib-rkt,kind=host,source=/var/lib/rkt \
  --mount volume=var-lib-rkt,target=/var/lib/rkt \
  --volume stage,kind=host,source=/tmp \
  --mount volume=stage,target=/tmp"
ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/usr/bin/mkdir -p /var/log/containers
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml \
  --register-schedulable=false \
  --network-plugin=cni \
  --container-runtime=rkt \
  --rkt-path=/usr/bin/rkt \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --hostname-override=coreos-2.tux-in.com \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

I have --hostname-override=coreos-2.tux-in.com set, so I'd expect the kubelet to register the node under that name, but it doesn't.
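To see what the kubelet is actually doing about registration, grepping its journal should help (assuming the unit is named kubelet.service as above):

journalctl -u kubelet --no-pager | grep -iE 'register|node'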

What do I do from here?


Solution

  • I needed to add the --require-kubeconfig parameter to the kubelet-wrapper invocation in kubelet.service (see the snippet below). This tells the kubelet to configure the API server from the kubeconfig file.
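For reference, the ExecStart from the kubelet.service above with the fix applied (only the new flag added, everything else unchanged):

ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --require-kubeconfig \
  --kubeconfig=/etc/kubernetes/controller-kubeconfig.yaml \
  --register-schedulable=false \
  --network-plugin=cni \
  --container-runtime=rkt \
  --rkt-path=/usr/bin/rkt \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --hostname-override=coreos-2.tux-in.com \
  --cluster_dns=10.3.0.10 \
  --cluster_domain=cluster.local

After a systemctl daemon-reload and systemctl restart kubelet, the node should register and show up in kubectl get nodes.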