Imagine a cluster running the Raft protocol with nodes spread across two completely separated networks (e.g., two AWS VPCs). The cluster runs fine for a while and has exactly one master, as expected. All of a sudden something goes wrong and the connection between the two networks breaks! Now we have two groups of nodes. In the group that has lost contact with the master, the nodes start an election and pick another master!
Clients outside the network can still see all nodes, so they are now effectively communicating with two clusters, each with its own state!
This definitely breaks the consistency of the replicated log. How exactly is this handled, or how should it be handled, in Raft?
Virtually every consensus protocol, Raft included, requires votes from a majority of the full cluster membership to elect a leader. In your example, there are two options:

1. One side of the partition holds a majority of the nodes. That side can elect (or keep) a leader and keep committing entries. The minority side can start elections, but it can never win one or commit anything, so no second master actually appears there. A stale leader stranded in the minority may still think it is leader, but none of its writes can be committed.
2. Neither side holds a majority (e.g., an even split). Then no leader can be elected on either side, and the whole cluster stalls until the partition heals.

Either way, two leaders with diverging committed logs cannot coexist, and that is precisely what preserves the consistency of the replicated log.
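The majority rule can be sketched as a quick check. This is just an illustration of the arithmetic (the function names are made up, not part of any Raft library): the key point is that the quorum is computed against the *full* cluster size, not against however many nodes a partition can currently reach.

```python
def majority(cluster_size: int) -> int:
    """Votes needed to win an election: a strict majority of ALL members."""
    return cluster_size // 2 + 1

def can_elect_leader(reachable_nodes: int, cluster_size: int) -> bool:
    """A partition can elect a leader only if it contains a majority."""
    return reachable_nodes >= majority(cluster_size)

# A 5-node cluster split 3/2 by a network partition:
print(can_elect_leader(3, 5))  # majority side -> True
print(can_elect_leader(2, 5))  # minority side -> False
```

Since two disjoint partitions can never both contain more than half of the same cluster, at most one side can ever satisfy this check, so split-brain with two committing leaders is impossible by construction.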