We plan to use TiDB for a distributed setup in Europe and Australia.
Does anyone have experience with such a distributed setup?
TiDB developer here.
Based on your description, yours is a long-distance, cross-data-center scenario. In this kind of deployment, you need to understand that your read and write latency will depend heavily on the latency between your data centers.
A more reasonable deployment is: if your workload is mainly in Europe and you need both strong consistency and high availability, choose two IDCs in Europe and one IDC in Australia to deploy TiDB, and deploy your application in Europe. This is because a TiDB write requires a majority of the replicas to be written successfully. In this scenario, the write latency is:
latency = min(latency(IDC1, IDC2), latency(IDC2, IDC3), latency(IDC1, IDC3))
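As a rough illustration (the round-trip times here are assumptions, not measurements): if the two European IDCs are about 15 ms apart and each is about 250 ms away from the Australian IDC, then

latency = min(latency(IDC1, IDC2) ≈ 15 ms, latency(IDC2, IDC3) ≈ 250 ms, latency(IDC1, IDC3) ≈ 250 ms) = 15 ms

so as long as the Raft leaders stay in Europe, commits never have to wait for the Australian replica.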
Here are some deployment suggestions and a comparison for different scenarios:
1. 3-DC Deployment Solution
TiDB, TiKV and PD are distributed among 3 DCs.
Advantages:
All the replicas are distributed among the 3 DCs. Even if one DC goes down, the other 2 DCs will initiate leader elections and resume service within a reasonable amount of time (within 20 s in most cases), and no data is lost.
Disadvantages:
The performance is greatly limited by the network latency.
For writes, all the data has to be replicated to at least 2 DCs. Because TiDB uses 2-phase commit for writes, the write latency is at least twice the latency of the network between two DCs.
The read performance will also suffer if the leader is not in the same DC as the TiDB node with the read request.
Each TiDB transaction needs to obtain a timestamp from the TimeStamp Oracle (TSO) on the PD leader. So if TiDB and the PD leader are not in the same DC, transaction performance is also impacted by the network latency, because each transaction that contains a write has to get a TSO twice.
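To make this concrete, here is a rough latency budget for a single write transaction, assuming TiDB and the TiKV Region leaders sit in IDC1, the PD leader sits in IDC2, and the two closest DCs are about 15 ms apart (assumed numbers, not measurements). Each write pays two TSO round trips (start_ts and commit_ts) plus two Raft commit rounds (prewrite and commit):

write latency ≈ 2 × latency(TiDB, PD leader) + 2 × latency(IDC1, IDC2) ≈ 2 × 15 ms + 2 × 15 ms = 60 ms

before counting the reads themselves, which is why keeping the PD leader and the Region leaders in the same DC as TiDB matters so much.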
Optimizations:
If not all three DCs need to provide service to the applications, you can dispatch all the requests to one DC and configure the scheduling policy to migrate all the TiKV Region leaders and the PD leader to that DC, as we have done in the test described below. In this way, neither obtaining TSO nor reading TiKV Regions is impacted by the network latency between DCs. If this DC goes down, the PD leader and the Region leaders will automatically be elected in the surviving DCs, and you just need to switch the requests to a DC that is still online.
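A rough sketch of that scheduling configuration, assuming the serving DC is labelled dc1 and the others dc2/dc3 (the label names are made up for illustration, and the exact mechanism depends on your TiDB version; newer versions use placement rules instead of label-property):

```
# Each TiKV node advertises which DC it is in (assumed label key "dc")
tikv-server --labels dc=dc1 ...    # nodes in the serving DC
tikv-server --labels dc=dc2 ...    # nodes in the second DC
tikv-server --labels dc=dc3 ...    # nodes in the third DC

# PD must be told to use this label for placement, e.g. in the PD config file:
#   [replication]
#   location-labels = ["dc"]

# Keep TiKV Region leaders out of dc2 and dc3, so they all stay in dc1
pd-ctl config set label-property reject-leader dc dc2
pd-ctl config set label-property reject-leader dc dc3
```

The idea is simply that every TiKV store advertises its DC, and PD is told not to place Raft leaders in the DCs that should not serve traffic; the PD leader itself is handled separately with member leader_priority, shown later in this post.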
2. 3-DC in 2 cities Deployment Solution
This solution is similar to the previous 3-DC deployment solution and can be considered an optimization based on the business scenario. The difference is that the distance between the 2 DCs within the same city is short, so the latency between them is very low. In this case, we can dispatch the requests to the two DCs within the same city and configure the TiKV Region leaders and the PD leader to stay in these 2 DCs (see the labelling sketch at the end of this section).
Compared with the 3-DC deployment, the 3-DC in 2 cities deployment has the advantage of much lower write latency, because writes can be committed by the two low-latency DCs within the same city without waiting for the remote DC.
However, the disadvantage is that if the 2 DCs within the same city go down, which is more likely than two DCs in two different cities going down at once, the TiDB cluster becomes unavailable and some of the data will be lost.
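For the 3-DC in 2 cities topology, a minimal sketch of the labelling (city and DC names are made up for illustration) is to give every TiKV node a two-level location label, so PD still spreads the 3 replicas across the 3 DCs while knowing which two of them share a city; the same reject-leader trick shown above then keeps the Region leaders in the low-latency city:

```
# Two DCs in city A, one DC in city B (assumed names)
tikv-server --labels city=cityA,dc=dc1 ...
tikv-server --labels city=cityA,dc=dc2 ...
tikv-server --labels city=cityB,dc=dc3 ...

# PD config: consider the city level first, then the dc level, when placing replicas
#   [replication]
#   location-labels = ["city", "dc"]
```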
3. 2-DC + Binlog Synchronization Deployment Solution
The 2-DC + Binlog synchronization solution is similar to the MySQL master-slave solution. Two complete TiDB clusters (each including TiDB, PD and TiKV) are deployed in 2 DCs, one acting as the Master and one as the Slave. Under normal circumstances, the Master DC handles all the requests, and the data written to the Master DC is asynchronously replicated to the Slave DC via Binlog.
If the Master DC goes down, the requests can be switched to the Slave cluster. As with MySQL, some data might be lost. But unlike MySQL, this solution still ensures high availability within each DC: if some nodes within a DC go down, the online business is not impacted and no manual effort is needed, because the cluster automatically re-elects leaders to keep providing service.
Some of our production users also adopt the 2-DC multi-active solution, which means that application requests are dispatched to both DCs, with each cluster serving part of the workload as a Master while acting as the Binlog Slave for the other DC's data.
Please note that for the 2-DC + Binlog synchronization solution, data is replicated asynchronously via Binlog. If the network latency between the 2 DCs is too high, the data in the Slave cluster will fall far behind the Master cluster. If the Master cluster goes down, some data will be lost, and it cannot be guaranteed that the lost data is within 5 minutes' worth.
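If you adopt the Binlog solution, it is worth keeping an eye on how far the Slave lags behind. As a minimal sketch (assuming the TiDB Binlog toolchain with Pump and Drainer; the PD URL is a placeholder), binlogctl can list the replication components and their states, and the Drainer checkpoint can be compared against the Master cluster's current timestamp to estimate the lag:

```
# List the Pump and Drainer nodes registered in PD, with their current states
binlogctl -pd-urls=http://127.0.0.1:2379 -cmd pumps
binlogctl -pd-urls=http://127.0.0.1:2379 -cmd drainers
```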
Overall analysis for HA and DR
For the 3-DC deployment solution and the 3-DC in 2 cities solution, we can guarantee that the cluster recovers automatically, that no human intervention is needed, and that the data remains strongly consistent even if any one of the 3 DCs goes down. All the scheduling policies exist to tune performance, but in case of an outage, availability is the top priority, not performance.
For the 2-DC + Binlog synchronization solution, we can guarantee that the cluster recovers automatically, that no human intervention is needed, and that the data remains strongly consistent even if some of the nodes within the Master cluster go down. When the entire Master cluster goes down, manual effort is needed to switch to the Slave, and some data will be lost; how much depends on the network latency and network conditions between the two DCs.
Recommendations on how to achieve high performance
As described previously, network latency is critical for performance in the 3-DC scenario. Due to the high latency, a transaction (10 reads + 1 write) takes about 100 ms, so a single thread can only reach about 10 TPS.
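The arithmetic behind that number is simply (the 100 ms per transaction comes from the measurement above, not from a universal constant):

single-thread TPS ≈ 1000 ms / 100 ms per transaction = 10 TPS

so within one connection you are latency-bound, no matter how powerful the servers are.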
This table shows the result of our Sysbench test (3 IDCs: 2 in US-West and 1 in US-East):
| threads | tps | qps |
|--------:|--------:|---------:|
| 1 | 9.43 | 122.64 |
| 4 | 36.38 | 472.95 |
| 16 | 134.57 | 1749.39 |
| 64 | 517.66 | 6729.55 |
| 256 | 1767.68 | 22979.87 |
| 512 | 2307.36 | 29995.71 |
| 1024 | 2406.29 | 31281.71 |
| 2048 | 2256.27 | 29331.45 |
In the previously recommended deployment, we schedule the TiKV Region leaders into one DC and set the priority of the PD leader with `pd-ctl member leader_priority pd1 2`, so that the PD leader is located in the same DC as the TiKV Region leaders, avoiding the overly high network latency of getting TSO.
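As a sketch, assuming three PD members named pd1, pd2 and pd3 (use the names reported by `pd-ctl member` in your cluster), a higher leader_priority makes a member more likely to be elected PD leader, so you can prefer the PD node in the serving DC:

```
# Prefer pd1 (in the serving DC) as the PD leader; a higher number means higher priority
pd-ctl member leader_priority pd1 2
pd-ctl member leader_priority pd2 1
pd-ctl member leader_priority pd3 1
```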
Based on these test results, we conclude that if you want more TPS, you should use higher concurrency in your application. We recommend the following:

- Use more concurrent connections/threads in the application instead of relying on single-connection throughput.
- Use larger transactions, for example going from 10 reads + 1 write per transaction to 100 reads + 10 writes, for higher QPS.

For the question about HA, the answer is that no manual operation is needed if the DC holding the leaders fails. Even though the leaders are scheduled into one DC, most of the replicas still survive and will elect new leaders after the failure, thanks to the Raft consensus algorithm. This process is automatic and requires no manual intervention. The service remains available, with only a slight performance degradation.
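If you want to reproduce the concurrency scaling shown in the table on your own cluster, a run along these lines with sysbench 1.0 is a reasonable starting point (host, credentials, table counts and durations are placeholders, and we have not stated the exact workload parameters used in our test):

```
# Prepare the test tables once, then run the read-write workload at increasing thread counts
sysbench oltp_read_write --mysql-host=<tidb-host> --mysql-port=4000 --mysql-user=root \
  --tables=16 --table-size=100000 prepare
for t in 1 4 16 64 256 512 1024; do
  sysbench oltp_read_write --mysql-host=<tidb-host> --mysql-port=4000 --mysql-user=root \
    --tables=16 --table-size=100000 --threads=$t --time=300 run
done
```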