/opt/kubernetes/bin/rke up --config /home/msh/rancher-cluster.yml
The rancher-cluster.yml file contains:
nodes:
- address: 192.168.10.34
  internal_address: 172.17.0.2
  user: bsh
  role: [controlplane,etcd]
- address: 192.168.10.35
  internal_address: 172.17.0.3
  user: bsh
  role: [worker]
- address: 192.168.10.36
  internal_address: 172.17.0.4
  user: bsh
  role: [worker]
addon_job_timeout: 120
Note: I have not configured an interface with any of these internal_address values on any of the nodes. My understanding is that Rancher/k8s will set these up for me... or something.
Here's the tail end of rke failing to start.
INFO[0039] Removing container [rke-bundle-cert] on host [192.168.10.34], try #1
INFO[0039] Image [rancher/rke-tools:v0.1.69] exists on host [192.168.10.34]
INFO[0039] Starting container [rke-log-linker] on host [192.168.10.34], try #1
INFO[0040] [etcd] Successfully started [rke-log-linker] container on host [192.168.10.34]
INFO[0040] Removing container [rke-log-linker] on host [192.168.10.34], try #1
INFO[0040] [remove/rke-log-linker] Successfully removed container on host [192.168.10.34]
INFO[0040] [etcd] Successfully started etcd plane.. Checking etcd cluster health
WARN[0055] [etcd] host [192.168.10.34] failed to check etcd health: failed to get /health for host [192.168.10.34]: Get https://172.17.0.2:2379/health: Unable to access the service on 172.17.0.2:2379. The service might be still starting up. Error: ssh: rejected: connect failed (Connection refused)
FATA[0055] [etcd] Failed to bring up Etcd Plane: etcd cluster is unhealthy: hosts [192.168.10.34] failed to report healthy. Check etcd container logs on each host for more information
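The failure boils down to "Connection refused" on 172.17.0.2:2379, i.e. nothing is reachable at the internal address. Before rerunning rke, the same probe can be reproduced with a plain TCP reachability check. A minimal sketch, assuming Python 3 is available on the node (the host and port are taken from the log above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno on failure
        return s.connect_ex((host, port)) == 0

# Example: probe the etcd client port at the internal_address from the log:
# port_open("172.17.0.2", 2379)
```

If this returns False, the problem is the address itself, not etcd. The etcd container started by RKE is typically named `etcd`, so `docker logs etcd` on 192.168.10.34 should show why it cannot serve on that address.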
Using:
Rancher v2.5.2
rke version v1.0.16
docker-ce-19.03.14-3.el8.x86_64
From my understanding, the interface configuration has to preexist; RKE will not take care of interface configuration.
Therefore, either set up an internal subnet and assign your interfaces to it, or use the external address for the internal communication as well.
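For this cluster that would mean either removing the internal_address lines entirely or pointing them at the addresses the nodes actually have. A sketch of the corrected rancher-cluster.yml (untested), reusing the external addresses:

```yaml
nodes:
- address: 192.168.10.34
  internal_address: 192.168.10.34  # reuse the external address, or omit this line
  user: bsh
  role: [controlplane,etcd]
- address: 192.168.10.35
  internal_address: 192.168.10.35
  user: bsh
  role: [worker]
- address: 192.168.10.36
  internal_address: 192.168.10.36
  user: bsh
  role: [worker]
addon_job_timeout: 120
```

After fixing the addresses, run `rke up --config rancher-cluster.yml` again; the etcd health check should now reach port 2379 on a real interface.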