Tags: concurrency, filesystems, distributed-system, gfs

Google File System Consistency Model


I was reading about GFS and its consistency model but I'm failing to grasp some of it. In particular, can someone provide me with a specific example scenario (or an explanation of why it cannot happen) of:

1. a record append that results in a duplicated record;
2. a record append that results in an undefined region;
3. concurrent writes that result in an undefined region?


Solution

  • I'm quoting from http://research.google.com/archive/gfs.html. Check out Table 1, which is a summary of the possible outcomes for writes/appends:

    Table 1 from the GFS paper ("File Region State After Mutation"):

                              Write                       Record Append
    Serial success            defined                     defined interspersed with inconsistent
    Concurrent successes      consistent but undefined    defined interspersed with inconsistent
    Failure                   inconsistent                inconsistent

    1. "If a record append fails at any replica, the client retries the operation. As a result, replicas of the same chunk may contain different data possibly including duplicates of the same record in whole or in part." So any failure on a replica (e.g. timeout) will cause a duplicate record at least on the other replicas. This can happen without concurrent writes.

    2. The same situation that causes a duplicate record also causes an inconsistent (and hence undefined) region. If a replica failed to acknowledge the mutation, it may not have performed it. In that case, when the client retries the append, this replica will have to add padding in place of the missing data so that the record can be written at the right offset. So one replica will have padding in that region while the others have the previously written record (the first sketch after this list shows this as well).

    3. A failed write can cause an inconsistent (hence undefined) region as well. More interestingly, successful concurrent writes can cause consistent but undefined regions (see the second sketch below). "If a write by the application is large or straddles a chunk boundary, GFS client code breaks it down into multiple write operations. They [...] may be interleaved with and overwritten by concurrent operations from other clients. Therefore, the shared file region may end up containing fragments from different clients, although the replicas will be identical because the individual operations are completed successfully in the same order on all replicas. This leaves the file region in consistent but undefined state [...]."
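
    To make points 1 and 2 concrete, here is a minimal Python sketch (not GFS code; the Replica class, record_append function, and the 8-byte record are all invented for illustration) of a record append to three replicas where one replica times out and the client retries. After the retry, the replicas that succeeded the first time hold the record twice, while the replica that missed the first attempt holds padding in that region, so the region differs across replicas.

        RECORD = b"payload!"    # 8-byte application record (made up for the example)
        PAD = b"\x00"           # padding byte written in place of a skipped region

        class Replica:
            def __init__(self, name):
                self.name = name
                self.data = bytearray()

            def apply_append(self, offset, record):
                # Pad up to the offset chosen by the primary, then write the record.
                if len(self.data) < offset:
                    self.data.extend(PAD * (offset - len(self.data)))
                self.data[offset:offset + len(record)] = record

        def record_append(replicas, record, fail_at=None):
            # One append attempt: the primary picks the offset (current end of the
            # chunk) and forwards the mutation to every replica.  Returns False if
            # any replica fails to acknowledge, in which case the client retries.
            offset = max(len(r.data) for r in replicas)
            ok = True
            for r in replicas:
                if r is fail_at:
                    ok = False      # e.g. a timeout: this replica misses the mutation
                else:
                    r.apply_append(offset, record)
            return ok

        replicas = [Replica("r1"), Replica("r2"), Replica("r3")]

        # First attempt: r3 times out, so the append is reported as failed ...
        if not record_append(replicas, RECORD, fail_at=replicas[2]):
            # ... and the client retries.  The primary picks a new offset past the
            # region used by the first attempt, so r3 must pad the region it missed.
            record_append(replicas, RECORD)

        for r in replicas:
            print(r.name, bytes(r.data))
        # r1 b'payload!payload!'                          <- duplicate record (point 1)
        # r2 b'payload!payload!'
        # r3 b'\x00\x00\x00\x00\x00\x00\x00\x00payload!'  <- padding; region inconsistent (point 2)

    The key modeling assumption is that the primary always appends at the current end of the chunk, so a retry never overwrites the earlier attempt; this is why record append is an at-least-once operation and why duplicates and padding can appear.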
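    For point 3, the following sketch (again illustrative only; split_into_chunk_ops and the 8-byte chunk size are invented for the example) shows two clients whose 16-byte writes each straddle a chunk boundary. Each chunk's primary serializes the operations it receives, and every replica applies them in that same order, so all replicas end up identical (consistent); but because the two chunks happen to order the clients differently, the final region mixes fragments from both clients and matches neither write (undefined).

        CHUNK_SIZE = 8

        def split_into_chunk_ops(offset, data):
            # Mimic the client library: break one large write into per-chunk operations.
            ops = []
            while data:
                chunk_index = offset // CHUNK_SIZE
                room = CHUNK_SIZE - offset % CHUNK_SIZE
                ops.append((chunk_index, offset % CHUNK_SIZE, data[:room]))
                offset += room
                data = data[room:]
            return ops

        # Both clients overwrite the same 16-byte region (two chunks) starting at offset 0.
        ops_a = split_into_chunk_ops(0, b"A" * 16)   # -> [(0, 0, b'AAAAAAAA'), (1, 0, b'AAAAAAAA')]
        ops_b = split_into_chunk_ops(0, b"B" * 16)

        # Each chunk's primary picks its own serial order for the operations it sees.
        # Assume chunk 0 happens to apply A then B, while chunk 1 applies B then A.
        order = {0: [ops_a[0], ops_b[0]], 1: [ops_b[1], ops_a[1]]}

        # Every replica of a chunk applies that chunk's operations in the same order.
        replicas = {name: bytearray(CHUNK_SIZE * 2) for name in ("r1", "r2", "r3")}
        for chunk_index, chunk_ops in order.items():
            for _, chunk_offset, fragment in chunk_ops:
                start = chunk_index * CHUNK_SIZE + chunk_offset
                for replica in replicas.values():
                    replica[start:start + len(fragment)] = fragment

        for name, data in replicas.items():
            print(name, bytes(data))
        # All three replicas print b'BBBBBBBBAAAAAAAA': identical (consistent), yet the
        # region mixes fragments from both clients, so it is undefined -- it matches
        # neither client's intended 16-byte write.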