kubernetes etcd

How is etcd cluster data read and synchronized in a k8s cluster?


To build a highly available k8s cluster, you need to build an etcd cluster. I found the following in the official k8s documentation:

Each control plane node creates a local etcd member and this etcd member communicates only with the kube-apiserver of this node. The same applies to the local kube-controller-manager and kube-scheduler instances.

That is, the kube-apiserver only communicates with the etcd member on its own node. Can we then understand that reads and writes happen on the etcd member of the same node?
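This is what kubeadm's "stacked etcd" topology wires up: the kube-apiserver static pod manifest points only at the local etcd member. A typical excerpt looks like the following (exact paths and flags may vary by kubeadm version):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt, stacked etcd topology)
spec:
  containers:
  - command:
    - kube-apiserver
    # Only the etcd member running on this same node is listed:
    - --etcd-servers=https://127.0.0.1:2379
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```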

But when I was studying etcd, I learned that clients of an etcd cluster read data through a follower and write data through the leader.

┌──[root@vms100.liruilongs.github.io]-[~/ansible/kubescape]
└─$ETCDCTL_API=3 etcdctl  --endpoints https://127.0.0.1:2379  --cert="/etc/kubernetes/pki/etcd/server.crt"  --key="/etc/kubernetes/pki/etcd/server.key"  --cacert="/etc/kubernetes/pki/etcd/ca.crt" endpoint status --cluster  -w table
+-----------------------------+------------------+---------+---------+-----------+-----------+------------+
|          ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+-----------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.26.100:2379 |  ee392e5273e89e2 |   3.5.4 |   37 MB |     false |       100 |    3152364 |
| https://192.168.26.102:2379 | 11486647d7f3a17b |   3.5.4 |   36 MB |     false |       100 |    3152364 |
| https://192.168.26.101:2379 | e00e3877df8f76f4 |   3.5.4 |   36 MB |      true |       100 |    3152364 |
+-----------------------------+------------------+---------+---------+-----------+-----------+------------+
┌──[root@vms100.liruilongs.github.io]-[~/ansible/kubescape]
└─$

In fact, there is only one Leader in the cluster. Does this read/write separation of etcd clusters apply to k8s?

As I understand it, this contradicts what is said above. I would like to know how etcd reads and writes work in a k8s cluster.


I know very little about etcd, thanks for clearing up my confusion!


Solution

  • In fact, there is only one Leader in the cluster. Does this read/write separation of etcd clusters apply to Kubernetes?

    Yes. In an etcd cluster, there is only one leader that performs the writes. But etcd internally forwards all requests that need consensus (e.g. writes) to the leader, so the client application (Kubernetes in our case) does not need to know which etcd node is the leader.

    From etcd FAQ:

    Do clients have to send requests to the etcd leader?

    Raft is leader-based; the leader handles all client requests which need cluster consensus. However, the client does not need to know which node is the leader. Any request that requires consensus sent to a follower is automatically forwarded to the leader. Requests that do not require consensus (e.g., serialized reads) can be processed by any cluster member.
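    You can see this distinction with etcdctl: a `get` is linearizable by default (it goes through cluster consensus via the leader), while `--consistency="s"` asks for a serializable read served locally by whichever member you contacted. As a sketch only: the endpoint and key below are illustrative, and the certificate flags are reused from the endpoint status command above; this requires a live cluster to actually run.

    ```shell
    # Linearizable read (default, "l"): requires cluster consensus, always up to date.
    ETCDCTL_API=3 etcdctl --endpoints https://192.168.26.100:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      get /registry/namespaces/default --consistency="l"

    # Serializable read ("s"): served from the contacted member's local store
    # without consensus -- lower latency, but may return slightly stale data.
    ETCDCTL_API=3 etcdctl --endpoints https://192.168.26.100:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      get /registry/namespaces/default --consistency="s"
    ```

    Either way, the client does not have to locate the leader itself: a linearizable request sent to a follower is forwarded internally, which is exactly why the kube-apiserver can safely talk only to its local etcd member.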