Hi all,
I built a k3s cluster with K3S_DATASTORE_ENDPOINT=${etcd_cluster_1}, and everything worked well.
Today I changed every etcd node's IP in ${etcd_cluster_1} for some reason, so ${etcd_cluster_1} must be changed to ${etcd_cluster_2}.
After restarting the etcd cluster successfully and restarting k3s, I found the warning "etcd-0(1,3) not healthy" in Rancher.
I think I need to migrate from ${etcd_cluster_1} to ${etcd_cluster_2}. How can I do this?
I found that, K3S_DATASTORE_ENDPONT env is stored at /etc/systemd/system/k3s.service.env
, I've modified it's content point to the ${etcd_cluster_2} and restarted the k3s with systemctl restart k3s
solved this issue.
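For anyone hitting the same problem, here is a minimal sketch of the fix. The etcd endpoint URLs below are hypothetical placeholders (not from my cluster), and the snippet demonstrates the edit on a copy of the env file; on a real server node you would edit /etc/systemd/system/k3s.service.env in place and then run systemctl restart k3s:

```shell
#!/bin/sh
# Hypothetical endpoint lists -- substitute your real etcd member URLs.
OLD='https://10.0.0.1:2379,https://10.0.0.2:2379,https://10.0.0.3:2379'
NEW='https://10.1.0.1:2379,https://10.1.0.2:2379,https://10.1.0.3:2379'

# Demonstrate on a copy; the real file is /etc/systemd/system/k3s.service.env
ENV_FILE=/tmp/k3s.service.env
cat > "$ENV_FILE" <<EOF
K3S_DATASTORE_ENDPOINT=${OLD}
EOF

# Swap the old endpoint list for the new one.
sed -i "s|${OLD}|${NEW}|" "$ENV_FILE"
grep K3S_DATASTORE_ENDPOINT "$ENV_FILE"

# On the actual node, apply the change with:
#   systemctl restart k3s
```

Since k3s reads K3S_DATASTORE_ENDPOINT from the env file at startup, a restart is enough to re-point it at the new etcd cluster; no daemon-reload is needed because the unit file itself is unchanged.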