
Are disk sector writes atomic?


Clarified Question:

When the OS sends the command to write a sector to disk, is it atomic? I.e. does the write of new data succeed fully, or is the old data left intact, should the power fail immediately following the write command? I don't care about what happens in multiple sector writes - torn pages are acceptable.

Old Question:

Say you have old data X on disk, you write new data Y over it, and a tree falls on the power line during that write. With no fancy UPS or battery backed disk controller, you can end up with a torn page, where the data on disk is part X and part Y. Can you ever end up with a situation where the data on disk is part X, part Y, and part garbage?

I've been trying to understand the design of ACID systems like databases, and to my naive thinking, it seems Firebird, which does not use a write-ahead log, relies on a given write not destroying old data (X) - it can only fail to fully write new data (Y). That means that if part of X is being overwritten, only the part of X that is being overwritten can be changed, not the part of X we intend to keep.

To clarify, this means that if you have a page-sized buffer, say 4096 bytes, filled half with Y and half with the X that we want to keep - and we tell the OS to write that buffer over X - there is no situation short of serious disk failure where the half of X that we want to keep is corrupted during the write.


Solution

  • The traditional (SCSI, ATA) disk protocol specifications don't guarantee that any/every sector write is atomic in the event of sudden power loss (but see below for discussion of the NVMe spec). However, it seems tacitly agreed that non-ancient "real" disks quietly try their best to offer this behaviour (e.g. Linux kernel developer Christoph Hellwig mentions this off-hand in the 2017 presentation "Failure-Atomic file updates for Linux").

    When it comes to synthetic disks (e.g. network attached block devices, certain types of RAID etc.) things are less clear and they may or may not offer sector atomicity guarantees while legally behaving per their given spec. Imagine a RAID 1 array (without a journal) composed of one disk that offers 512 byte sized sectors and another disk that offers 4KiB sized sectors, thus forcing the RAID to expose a sector size of 4KiB. As a thought experiment, you can construct a scenario where each individual disk offers sector atomicity (relative to its own sector size) but where the RAID device does not in the face of power loss: the outcome would depend on whether the 512 byte sector disk was the one being read by the RAID and how many of the 8 512-byte sectors comprising the 4KiB RAID sector it had written before the power failed.

    Sometimes specifications offer atomicity guarantees but only on certain write commands. The SCSI disk spec is an example of this and the optional WRITE ATOMIC(16) command can even give a guarantee beyond a sector but being optional it's rarely implemented (and thus rarely used). The more commonly implemented COMPARE AND WRITE is also atomic (potentially across multiple sectors too) but again it's optional for a SCSI device and comes with different semantics to a plain write...

    Curiously, the NVMe spec was written in such a way to guarantee sector atomicity thanks to Linux kernel developer Matthew Wilcox. Devices that are compliant with that spec have to offer a guarantee of sector write atomicity and may choose to offer contiguous multi-sector atomicity up to a specified limit (see the AWUPF field). However, it's unclear how you can discover and use any multi-sector guarantee if you aren't currently in a position to send raw NVMe commands...
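
    For illustration, here is a minimal sketch (not from the original answer) of what "sending raw NVMe commands" to read these fields can look like on Linux, using the NVMe admin passthrough ioctl. The device path is a placeholder and the AWUN/AWUPF byte offsets are taken from the NVMe Identify Controller layout:

        /*
         * Hedged sketch: issue a raw Identify Controller admin command and read
         * the AWUN/AWUPF fields (atomic write unit: normal / power fail).
         * Assumes Linux's <linux/nvme_ioctl.h>; /dev/nvme0 is a placeholder.
         */
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/nvme_ioctl.h>

        int main(void)
        {
            int fd = open("/dev/nvme0", O_RDONLY);   /* controller character device */
            if (fd < 0) { perror("open"); return 1; }

            uint8_t id[4096] = { 0 };
            struct nvme_admin_cmd cmd = { 0 };
            cmd.opcode   = 0x06;                     /* Identify */
            cmd.addr     = (uint64_t)(uintptr_t)id;
            cmd.data_len = sizeof(id);
            cmd.cdw10    = 1;                        /* CNS=1: Identify Controller */

            if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) { perror("ioctl"); return 1; }

            uint16_t awun, awupf;
            memcpy(&awun,  &id[526], sizeof(awun));  /* AWUN:  bytes 527:526 */
            memcpy(&awupf, &id[528], sizeof(awupf)); /* AWUPF: bytes 529:528 */

            /* Both values are 0's based, i.e. 0 means one logical block. */
            printf("AWUN=%u AWUPF=%u (in logical blocks, 0's based)\n", awun, awupf);
            close(fd);
            return 0;
        }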

    Andy Rudoff is an engineer who talks about investigations he has done on the topic of write atomicity. His presentation "Protecting SW From Itself: Powerfail Atomicity for Block Writes" (slides) has a section of video where he talks about how power failure impacts in-flight writes on traditional storage. He describes how he contacted hard drive manufacturers about the statement "a disk's rotational energy is used to ensure that writes are completed in the face of power loss" but the replies were non-committal as to whether that manufacturer actually performed such an action. Further, no manufacturer would say that torn writes never happen, and while he was at Sun, ZFS added checksums to blocks, which led to them uncovering cases of torn writes during testing. It's not all bleak though - Andy talks about how sector tearing is rare and if a write is interrupted then you usually get only the old sector, or only the new sector, or an error (so at least corruption is not silent). Andy also has an older slide deck "Write Atomicity and NVM Drive Design" which collects popular claims and cautions that a lot of software (including various popular filesystems on multiple OSes) is actually unknowingly dependent on sector writes being atomic...

    (The following takes a Linux centric view but many of the concepts apply to general-purpose OSes that are not being deployed in a tightly controlled hardware environment)

    Going back to 2013, BtrFS lead developer Chris Mason talked about how (the now defunct) Fusion-io had created a storage product that implemented atomic write operations (Chris was working for Fusion-io at the time). Fusion-io also created a proprietary filesystem "DirectFS" (written by Chris) to expose this feature. The MariaDB developers implemented a mode that could take advantage of this behaviour by no longer doing double buffering, resulting in "43% more transactions per second and half the wear on the storage device". Chris proposed a patch so generic filesystems (such as BtrFS) could advertise that they provided atomicity guarantees via a new flag O_ATOMIC, but block layer changes would also be needed. Said block layer changes were also proposed by Chris in a later patch series that added a function blk_queue_set_atomic_write(). However, neither of the patch series ever entered the mainline Linux kernel and there is no O_ATOMIC flag in the (current as of 2020) mainline 5.7 Linux kernel.

    Before we go further, it's worth noting that even if a lower level doesn't offer an atomicity guarantee, a higher level can still provide atomicity (albeit with performance overhead) to its users so long as it knows when a write has reached stable storage. If fsync() can tell you when writes are on stable storage (technically not guaranteed by POSIX but the case on modern Linux) then, because POSIX rename is atomic, you can use the create new file/fsync/rename dance to do atomic file updates (sketched below), thus allowing applications to do double buffering/Write Ahead Logging themselves. Other examples lower down in the stack are Copy On Write filesystems like BtrFS and ZFS. These filesystems give userspace programs a guarantee of "all the old data" or "all the new data" after a crash at sizes greater than a sector because of their semantics, even though a disk may not offer atomic writes. You can push this idea all the way down into the disk itself where NAND based SSDs don't overwrite the area currently used by an existing LBA and instead write the data to a new region and keep a mapping of where the LBA's data is now.
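
    To make the create/fsync/rename dance concrete, here is a minimal sketch assuming POSIX APIs on Linux; the file names are illustrative and error handling is abbreviated:

        /*
         * Minimal sketch of an atomic file replace via the
         * write-temp-file/fsync/rename dance described above.
         */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static int atomic_replace(const char *path, const char *tmp_path,
                                  const void *buf, size_t len)
        {
            /* 1. Write the complete new contents to a temporary file. */
            int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0)
                return -1;
            if (write(fd, buf, len) != (ssize_t)len)
                goto fail;

            /* 2. fsync() so the new data reaches stable storage before the rename. */
            if (fsync(fd) != 0)
                goto fail;
            close(fd);

            /* 3. rename() is atomic per POSIX: a reader (or a crash) sees either
             *    the whole old file or the whole new file, never a mix. */
            if (rename(tmp_path, path) != 0)
                return -1;

            /* 4. Optionally fsync() the containing directory so the rename itself
             *    is durable (omitted here for brevity). */
            return 0;

        fail:
            close(fd);
            unlink(tmp_path);
            return -1;
        }

        int main(void)
        {
            const char msg[] = "all-new contents\n";
            return atomic_replace("data.txt", "data.txt.tmp", msg, strlen(msg)) ? 1 : 0;
        }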

    Resuming our abridged timeline, in 2015 HP researchers wrote a paper Failure-Atomic Updates of Application Data in a Linux File System (PDF) (media) about introducing a new feature into the Linux port of AdvFS (AdvFS was originally part of DEC's Tru64):

    If a file is opened with a new O_ATOMIC flag, the state of its application data will always reflect the most recent successful msync, fsync, or fdatasync. AdvFS furthermore includes a new syncv operation that combines updates to multiple files into a failure-atomic bundle [...]

    In 2017, Christoph Hellwig wrote experimental patches to XFS to provide O_ATOMIC. In the "Failure-Atomic file updates for Linux" talk (slides) he explains how he drew inspiration from the 2015 paper (but without the multi-file support) and the patchset extends the XFS reflink work that already existed. However, despite an initial mailing list post, at the time of writing (mid 2020) this patchset is not in the mainline kernel.

    During the database track of the 2019 Linux Plumbers Conference, MySQL developer Dimitri Kravtchuk asked if there were plans to support O_ATOMIC (link goes to start of filmed discussion). Those assembled mention the XFS work above, that Intel claim they can do atomicity on Optane but Linux doesn't provide an interface to expose it, that Google claims to provide 16KiB atomicity on GCE storage1. Another key point is that many database developers need something larger than 4KiB atomicity to avoid having to do double writes - PostgreSQL needs 8KiB, MySQL needs 16KiB and apparently the Oracle database needs 64KiB. Further, Dr Richard Hipp (author of the SQLite database) asked if there's a standard interface to request atomicity because today SQLite makes use of the F2FS filesystem's ability to do atomic updates via custom ioctl()s but the ioctl was tied to one filesystem. Chris replied that for the time being there's nothing standard and nothing provides the O_ATOMIC interface.
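
    As an aside, the F2FS mechanism referred to above looks roughly like the following sketch. The ioctl numbers are copied from the F2FS driver sources (a uapi header may not be present on older systems) and "dbfile" is illustrative only:

        /*
         * Hedged sketch of F2FS's filesystem-specific atomic update ioctls:
         * writes issued between "start" and "commit" become visible
         * all-or-nothing. The file must live on an f2fs filesystem.
         */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        #define F2FS_IOCTL_MAGIC             0xf5
        #define F2FS_IOC_START_ATOMIC_WRITE  _IO(F2FS_IOCTL_MAGIC, 1)
        #define F2FS_IOC_COMMIT_ATOMIC_WRITE _IO(F2FS_IOCTL_MAGIC, 2)

        int main(void)
        {
            int fd = open("dbfile", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            /* Begin an atomic batch: subsequent writes are staged... */
            if (ioctl(fd, F2FS_IOC_START_ATOMIC_WRITE) < 0) { perror("start"); return 1; }

            char page[4096] = { 0 };
            if (pwrite(fd, page, sizeof(page), 0) != (ssize_t)sizeof(page)) {
                perror("pwrite");
                return 1;
            }

            /* ...and only become visible together on commit. */
            if (ioctl(fd, F2FS_IOC_COMMIT_ATOMIC_WRITE) < 0) { perror("commit"); return 1; }

            close(fd);
            return 0;
        }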

    At the 2021 Linux Plumbers Conference Darrick Wong re-raised the topic of atomic writes (link goes to start of filmed discussion). He pointed out there are two different things that people mean when they say they want atomic writes:

    1. Hardware provides some atomicity API and this capability is somehow exposed through the software stack
    2. Make the filesystem do all the work to expose some sort of atomic write API irrespective of hardware

    Darrick mentioned that Christoph had ideas for 1. in the past but Christoph has not come back to the topic, and further there are unanswered questions (how you make userspace aware of limits, and the fact that if the feature were exposed it would be restricted to direct I/O, which may be problematic for many programs). Instead, Darrick's suggestion for tackling 2. was to propose his FIEXCHANGE_RANGE ioctl, which swaps the contents of two files (the swap is restartable if it fails part way through). This approach doesn't have the limits (e.g. smallish contiguous size, maximum number of scatter gather vectors, direct I/O only) that a hardware based solution would have and could theoretically be implemented in the VFS, thus being filesystem agnostic...

    At the 2023 Linux Storage, Filesystem, Memory-Management and BPF Summit there was discussion about exposing device atomicity in a way that could be used by an application. Prior to the talk, an RFC patch series titled "block atomic writes" had been posted; only direct I/O and XFS were supported by that patch series.

    The benefit of the work is described as follows:

    With this new interface, application blocks will never be torn or fractured. For a power fail, for each individual application block, all or none of the data to be written. A racing atomic write and read will mean that the read sees all the old data or all the new data, but never a mix of old and new.

    TL;DR: if you are in tight control of your whole stack from the application all the way down to the physical disks (so you can control and qualify the whole lot) you can arrange to have what you need to make use of disk atomicity. If you're not in that situation, or you're talking about the general case, you should not depend on sector writes being atomic.

    When the OS sends the command to write a sector to disk, is it atomic?

    At the time of writing (mid-2020):

    a sector write sent by the kernel is likely atomic (assuming a sector is no bigger than 4KiB). In controlled cases (battery backed controller, NVMe disk which claims to support atomic writes, SCSI disk where the vendor has given you assurances etc.) a userspace program may be able to use O_DIRECT, so long as O_DIRECT isn't silently reverting to being buffered and the I/O doesn't get split apart or merged at the block layer (or you are sending device specific commands and bypassing the block layer entirely). However, in the general case neither the kernel nor a userspace program can safely assume sector write atomicity.
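
    As an illustration of the "controlled case", here is a minimal sketch (not part of the original answer) of a single aligned sector write using O_DIRECT on Linux; the device path and the 4096 byte sector size are assumptions:

        /*
         * Hedged sketch: write exactly one aligned sector with O_DIRECT.
         * /dev/sdX is a placeholder and 4096 is an assumed sector size;
         * O_DIRECT requires the buffer, offset and length to be aligned.
         */
        #define _GNU_SOURCE             /* for O_DIRECT */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            const size_t sector = 4096;

            int fd = open("/dev/sdX", O_WRONLY | O_DIRECT | O_SYNC);
            if (fd < 0) { perror("open"); return 1; }

            void *buf;
            if (posix_memalign(&buf, sector, sector) != 0) {  /* aligned memory for O_DIRECT */
                fprintf(stderr, "posix_memalign failed\n");
                return 1;
            }
            memset(buf, 'Y', sector);   /* the "new data Y" from the question */

            /* One aligned, sector-sized pwrite: in the controlled cases above this
             * should either fully replace the old sector or leave it intact. */
            if (pwrite(fd, buf, sector, 0) != (ssize_t)sector) {
                perror("pwrite");
                return 1;
            }

            free(buf);
            close(fd);
            return 0;
        }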

    Can you ever end up with a situation where the data on disk is part X, part Y, and part garbage?

    From a specification perspective, if you are talking about a SCSI disk doing a regular SCSI WRITE(16) and a power failure happening in the middle of that write then the answer is yes: a sector could contain part X, part Y AND part garbage. A crash during an in-flight write means the data read from the area that was being written to is indeterminate and the disk is free to choose what it returns as data from that region. This means all old data, all new data, some old and new, all zeros, all ones, random data etc. are all "legal" values to return for said sector. From an old draft of the SBC-3 spec:

    4.9 Write failures

    If one or more commands performing write operations are in the task set and are being processed when power is lost (e.g., resulting in a vendor-specific command timeout by the application client) or a medium error or hardware error occurs (e.g., because a removable medium was incorrectly unmounted), the data in the logical blocks being written by those commands is indeterminate. When accessed by a command performing a read or verify operation (e.g., after power on or after the removable medium is mounted), the device server may return old data, new data, or vendor-specific data in those logical blocks.

    Before reading logical blocks which encountered such a failure, an application client should reissue any commands performing write operations that were outstanding.


    1 In 2018 Google announced it had tweaked its cloud SQL stack and that this allowed them to use 16k atomic writes in MySQL with innodb_doublewrite=0 via O_DIRECT... The underlying customisations Google performed were described as being in the virtualized storage, kernel, virtio and the ext4 filesystem layers. Further, a no longer available beta document titled "Best practices for 16 KB persistent disk and MySQL" (archived copy) described what end users had to do to safely make use of the feature. Changes included: using an appropriate Google provided VM, using specialized storage, changing block device parameters and carefully creating an ext4 filesystem with a specific layout. However, at some point in 2020 this document vanished from GCE's online guides, suggesting such end user tuning is not supported.