In the Bigtable paper, Google explains on page 4: "Bigtable relies on a highly-available and persistent distributed lock service called Chubby". They also say they developed it in-house, based on the Paxos algorithm. I find these details strange, because the whole of Bigtable should be highly available, persistent, and distributed, not just the locks. But the paper never mentions consensus algorithms (Paxos, Raft, ...) again, nor what happens when some Bigtable servers fail. Thanks to the Chubby locks, each Bigtable tablet has at most one client that writes to it. But reducing the number of writers does not help when some Bigtable servers fail.
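To make the "at most one writer per tablet" point concrete, here is a toy sketch of the lock-based serving rule, not Bigtable's actual code; the lock service client and acquire_exclusive / is_held methods are hypothetical stand-ins for a Chubby-like interface:

```python
class LockLostError(Exception):
    """The server no longer holds its exclusive lock (its session expired)."""


class TabletServer:
    def __init__(self, server_id, lock_service):
        self.server_id = server_id
        self.lock_service = lock_service  # hypothetical Chubby-like client
        self.lock = None
        self.tablets = {}                 # tablet_id -> in-memory state

    def start(self):
        # On startup the server acquires an exclusive lock on a uniquely
        # named file; holding it is what entitles the server to serve tablets.
        self.lock = self.lock_service.acquire_exclusive(
            f"/bigtable/servers/{self.server_id}")

    def assign_tablet(self, tablet_id):
        # The master assigns each tablet to exactly one live, locked server.
        self.tablets[tablet_id] = {}

    def write(self, tablet_id, row_key, value):
        # Refuse writes once the lock is lost, so a dead or partitioned
        # server cannot keep acting as the tablet's single writer.
        if self.lock is None or not self.lock.is_held():
            raise LockLostError("lost the Chubby lock; stop serving tablets")
        self.tablets[tablet_id][row_key] = value
```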
In section 5.2 they say "each tablet is assigned to one tablet server at a time". In section 5.3, "the persistent state of a tablet is stored in GFS". Do those 2 sentences guarantee that the tablets are replicated? By a derivative of Paxos?
Neither sentence guarantees tablet replication. Section 5.2 says a tablet is served by exactly one tablet server at a time, and section 5.3 says the tablet's underlying files (SSTables and the commit log) live in GFS, which replicates that data at the storage layer. Durability comes from the file system, not from replicating tablets across tablet servers.
Notes:
That whitepaper is old (2006).
GFS was replaced by Colossus.
Colossus storage is replicated.
Bigtable supports replication when you configure more than one cluster, with eventual consistency between clusters (see the sketch after these notes).
I am not aware of a reference that Bigtable replicates tablets. Bigtable splits, merges, and compacts tablets based on the workload of each cluster. Each cluster would have a different workload, so tablet boundaries differ between clusters, which rules out replicating individual tablets one-to-one.
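To make the multi-cluster replication note concrete, here is a rough sketch using the google-cloud-bigtable Python admin client; the project, instance, and cluster IDs are placeholders, and the exact API may differ in current releases:

```python
from google.cloud import bigtable
from google.cloud.bigtable import enums

# Replication is enabled simply by creating an instance with more than one
# cluster; Bigtable then replicates data between the clusters with eventual
# consistency. All IDs below are placeholders.
client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance", display_name="Replicated instance")

cluster_east = instance.cluster(
    "my-cluster-east",
    location_id="us-east1-b",
    serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)
cluster_west = instance.cluster(
    "my-cluster-west",
    location_id="us-west1-a",
    serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)

# Creating the instance with two clusters turns on replication between them.
operation = instance.create(clusters=[cluster_east, cluster_west])
operation.result(timeout=300)  # wait for the long-running operation to finish
```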