I have a loop that spawns a thread every 5 seconds, and each thread tries to append to the same file using the filelock module in Python, but it looks like they end up overwriting the file:
import threading
import filelock

def loop():
    threading.Timer(5, loop).start()
    lock = filelock.FileLock("PATH", timeout=20)
    with lock.acquire(timeout=0.1, poll_intervall=0.01):
        with open("PATH", "a") as myFile:
            myFile.write("DATA\n\n")
    lock.release()
Edit: Additional info: after multiple iterations, the data I found in the file was from the last thread, not the first.
Edit: As commented by georgexsh below, append is atomic, hence we don't need a lock for it.
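For reference, a minimal sketch of the simplified version without the lock (keeping the 5-second rescheduling loop; "PATH" and "DATA" are placeholders as in the original):

import threading

def loop():
    threading.Timer(5, loop).start()
    # Per the comment above, small appends in "a" mode are effectively atomic,
    # so no explicit file lock is needed for this write.
    with open("PATH", "a") as myFile:
        myFile.write("DATA\n\n")

loop()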
I suppose you are using this filelock package. As your code uses the lock object as a context manager in a with statement, the lock will be released when the with block is exited.
Writing data to the lock file is not a wise idea: the lock file will be truncated to length 0 when the lock is acquired, as O_TRUNC is used to open it.
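A minimal sketch of the usual fix: lock a separate file (here "PATH.lock", a name chosen for illustration) so that acquiring the lock never truncates the data file:

import threading
import filelock

def loop():
    threading.Timer(5, loop).start()
    # Lock a separate file; if acquiring the lock truncates it, that is
    # harmless because it holds no data.
    lock = filelock.FileLock("PATH.lock", timeout=20)
    with lock.acquire(timeout=0.1, poll_intervall=0.01):
        with open("PATH", "a") as myFile:
            myFile.write("DATA\n\n")
    # No explicit release needed: the context manager releases the lock on exit.

loop()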
If you change the inner with block to:
import time

with lock.acquire(timeout=0.1, poll_intervall=0.01):
    with open("PATH", "a") as myFile:
        myFile.write("DATA\n\n")
    time.sleep(1000)
the first thread that acquired the file lock will hold it, and the other threads will be blocked as you expected.
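Note that with acquire(timeout=0.1), a thread that cannot obtain the lock within 0.1 seconds raises filelock.Timeout rather than waiting indefinitely. A minimal sketch of handling that, assuming you simply want to skip the write in that case:

import filelock

lock = filelock.FileLock("PATH.lock", timeout=20)
try:
    with lock.acquire(timeout=0.1, poll_intervall=0.01):
        with open("PATH", "a") as myFile:
            myFile.write("DATA\n\n")
except filelock.Timeout:
    # Another thread held the lock for longer than 0.1 s; skip this write.
    pass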