bash, multiprocessing, locking, lockfile

Is a lockfile necessary when two processes read and write the same file?


I'm working with Bash scripts and have run into the following situation:

one Bash script writes to a file, and another Bash script reads from the same file.

In this case, is a lockfile necessary? I don't think I need one, since there is only one reading process and only one writing process, but I'm not sure.

Bash write.sh:

#!/bin/bash

echo 'success' > tmp.log


Bash read.sh:

#!/bin/bash
while :
do
    # read the first line; suppress the error if tmp.log doesn't exist yet
    line=$(head -n 1 ./tmp.log 2>/dev/null)
    if [[ "$line" == "success" ]]; then
        echo 'done'
        break
    else
        sleep 3
    fi
done

By the way, write.sh could write any of several keywords, such as success, fail, etc.


Solution

  • While many programmers ignore this, you can potentially run into a problem because writing to the file is not atomic. When the writer does

    echo success > tmp.log
    

    it could be split into two (or more) parts: first it writes suc, then it writes cess\n.
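
    To make the failure mode concrete, you can simulate such a split write by hand (a contrived sketch; a shell echo is unlikely to actually split a write this short):

    printf 'suc' > tmp.log       # first part of the write lands in the file
    sleep 1                      # window in which the reader can run and see "suc"
    printf 'cess\n' >> tmp.log   # the rest of the line arrives later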

    If the reader executes between those steps, it might get just suc rather than the whole success line. Using a lockfile would prevent this race condition.
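
    One way to get that locking in shell is flock(1) (a minimal sketch, assuming flock is available; tmp.lock is an arbitrary lock-file name introduced here):

    #!/bin/bash
    # write.sh: hold an exclusive lock while writing
    exec 200>tmp.lock            # open (or create) the lock file on fd 200
    flock -x 200                 # block until we hold the exclusive lock
    echo 'success' > tmp.log
    flock -u 200                 # release the lock

    #!/bin/bash
    # read.sh: hold a shared lock while reading
    exec 200>tmp.lock
    while :
    do
        flock -s 200             # wait until no writer holds the lock
        line=$(head -n 1 ./tmp.log 2>/dev/null)
        flock -u 200
        if [[ "$line" == "success" ]]; then
            echo 'done'
            break
        else
            sleep 3
        fi
    done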

    This is unlikely to happen with short writes from a shell echo command, which is why most programmers don't worry about it. However, if the writer is a C program using buffered output, the buffer could be flushed at arbitrary times, which would likely end with a partial line.

    Also, since the reader is reading the file from the beginning each time, you don't have to worry about starting the read where the previous one left off.

  • Another way to do this is for the writer to write to a file with a different name and then rename it to the name the reader is looking for. Renaming is atomic, so you're guaranteed to read either all of the file or nothing.
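
    A sketch of that pattern (tmp.log.new is an arbitrary temporary name; note that mv is only atomic when both names live on the same filesystem):

    #!/bin/bash
    # write.sh: write under a temporary name, then atomically rename
    echo 'success' > tmp.log.new
    mv tmp.log.new tmp.log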