```python
# Clarification:
# function f() is the only function that would acquire both locks.
# It is protected by other locks, so f() itself has no concurrency.
# It always acquires lock1 first and then acquires lock2 while holding lock1.
# In other words, NO thread will ever own lock2 and wait for lock1.
def f():
    lock1.acquire()
    task_protected_by_lock1()            # Might acquire lock2 internally
    lock2.acquire()
    task_protected_by_lock1_and_lock2()
    lock1.release()
    task_protected_by_lock2()            # Might acquire lock1 internally
    lock2.release()
```
However, I found it impossible to handle SIGINT correctly, because it raises a KeyboardInterrupt exception at an essentially random location. I need to guarantee that lock1 and lock2 are both released when control flow exits f() (i.e. on either a normal return or an unhandled exception).

I am aware that SIGINT can be temporarily masked. However, correctly restoring the mask becomes another challenge, because it might already be masked from outside. Also, the tasks performed between the locks might themselves tweak signal masks. I believe there has to be a better solution.
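For reference, a minimal sketch of the masking approach (POSIX-only; `sigint_blocked` is a hypothetical name): `signal.pthread_sigmask` returns the mask that was previously in effect, which addresses the restore-from-outside concern, though tasks that tweak masks themselves remain a problem:

```python
import signal
from contextlib import contextmanager

@contextmanager
def sigint_blocked():
    # SIG_BLOCK returns the mask that was in effect before the call,
    # so restoring it is safe even if SIGINT was already blocked outside.
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
    try:
        yield
    finally:
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)
```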
I am wondering whether there is a way to utilize context managers (the `with` statement) to achieve this. I've considered the following, but none of them works for my use case:

```python
def f():
    with lock1, lock2:
        task_protected_by_lock1()            # Bad: acquiring lock2 internally will cause deadlock
        task_protected_by_lock1_and_lock2()  # Good
        task_protected_by_lock2()            # Bad: acquiring lock1 internally will cause deadlock
```
```python
def f():
    with lock1:
        task_protected_by_lock1()                # Good
        with lock2:
            task_protected_by_lock1_and_lock2()  # Good
            task_protected_by_lock2()            # Bad: acquiring lock1 internally will cause deadlock
```
```python
def f():
    flag1 = False
    flag2 = False
    try:
        lock1.acquire()
        # Bad: SIGINT might be raised here
        flag1 = True
        task_protected_by_lock1()
        lock2.acquire()
        # Bad: SIGINT might be raised here
        flag2 = True
        task_protected_by_lock1_and_lock2()
        lock1.release()
        # Bad: SIGINT might be raised here
        flag1 = False
        task_protected_by_lock2()
        lock2.release()
        # Bad: SIGINT might be raised here
        flag2 = False
    except BaseException:  # KeyboardInterrupt derives from BaseException, not Exception
        if flag1:
            lock1.release()
        if flag2:
            lock2.release()
        raise
```
```python
def f():
    try:
        lock1.acquire()
        task_protected_by_lock1()
        lock2.acquire()
        task_protected_by_lock1_and_lock2()
        lock1.release()
        # Suppose SIGINT happens here, just after another thread acquired lock1
        task_protected_by_lock2()
        lock2.release()
    except BaseException:  # KeyboardInterrupt derives from BaseException, not Exception
        if lock1.locked():
            lock1.release()  # Bad: lock1 is NOT owned by this thread!
        if lock2.locked():
            lock2.release()
        raise
```
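As an aside, this hazard exists because threading.Lock has no concept of an owning thread, so a foreign release() silently succeeds; RLock, by contrast, does track ownership. A toy demonstration (not from the original post):

```python
import threading

plain = threading.Lock()
t = threading.Thread(target=plain.acquire)
t.start()
t.join()         # the worker thread now holds `plain`
plain.release()  # succeeds from the main thread anyway!

reentrant = threading.RLock()
t = threading.Thread(target=reentrant.acquire)
t.start()
t.join()
try:
    reentrant.release()  # RLock remembers which thread acquired it
except RuntimeError as e:
    print(e)             # "cannot release un-acquired lock"
```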
```python
def f():
    with lock1:
        task_protected_by_lock1()
    # Bad: another thread might acquire lock1 here and modify protected resources.
    # This breaks data consistency between the 1st and 2nd tasks.
    with lock1, lock2:
        task_protected_by_lock1_and_lock2()
    # Bad: another thread might acquire lock2 here and modify protected resources.
    # This breaks data consistency between the 2nd and 3rd tasks.
    with lock2:
        task_protected_by_lock2()
```
Here is the logic I am trying to implement. This logic is part of a utility library, so the behavior of `task()` depends on how the user implements it. You're welcome to provide a better solution that does not require interlacing locks, as long as it retains the exact same behavior.
```python
from collections.abc import Generator
from threading import Lock
from typing import Callable

lock1 = Lock()  # Guards observation/assignment of `task`
lock2 = Lock()  # Guards execution of `task`

# Executing the task might acquire lock1 and change `task`
task: Callable | Generator | None

def do_task():
    global task  # `task` is assigned below, so it must be declared global
    lock1.acquire()
    if not validate_task(task):
        task = None  # `task` can be modified here
    if task is None:
        # Observation of `task` fails
        lock1.release()
        return
    lock2.acquire()  # Execution lock acquired before observation lock is released
    task_snapshot = task
    lock1.release()  # Now other threads may update `task`,
    # but since the execution lock (lock2) is owned here,
    # an updated task will not be executed until this one finishes.
    try:
        if callable(task_snapshot):
            task_snapshot()
        else:
            assert isinstance(task_snapshot, Generator)
            # This is why lock2 is needed:
            # a concurrent next() would raise "generator already executing".
            next(task_snapshot)
    except BaseException:  # the original bare `except:` also caught KeyboardInterrupt
        with lock1:
            task = None
    lock2.release()
```
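To illustrate the comment above, here is a toy demonstration (not part of the library): calling next() on a generator that is already executing in another thread raises ValueError:

```python
import threading
import time

def gen():
    while True:
        time.sleep(0.1)  # simulate work inside the generator
        yield

g = gen()
threading.Thread(target=lambda: next(g)).start()
time.sleep(0.01)  # give the thread time to enter the generator
try:
    next(g)       # concurrent next() on a running generator
except ValueError as e:
    print(e)      # "generator already executing"
```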
Desired pattern (suppose the 2nd execution of task1 updates the task):

    task1, task1, task1, task2, task2, ...

Bad pattern (same scenario as above; the out-of-order execution is marked with slashes):

    task1, task1, /task2/, task1, task2, ...
You can interleave context managers by using contextlib.ExitStack with a "stack" of just one context manager, because it lets you exit it early with the close() method:
```python
with ExitStack() as es:
    es.enter_context(lock_a)
    protected_by_a()
    with lock_b:
        protected_by_a_and_b()
        es.close()
        protected_by_b()
```
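Applied to the f() from the question, a minimal sketch might look like the following (one stack per lock; note that enter_context() registers the release callback only after the acquire succeeds, so a tiny SIGINT window between those two steps remains in principle):

```python
from contextlib import ExitStack

def f():
    with ExitStack() as es1, ExitStack() as es2:
        es1.enter_context(lock1)  # released on any exit, unless closed early
        task_protected_by_lock1()
        es2.enter_context(lock2)
        task_protected_by_lock1_and_lock2()
        es1.close()               # hand-over-hand: drop lock1, keep lock2
        task_protected_by_lock2()
    # leaving the with-block releases lock2, even on KeyboardInterrupt
```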
You can even push more context managers back onto it after close() if need be, allowing you to do more complex things:
```python
with ExitStack() as es_a:
    with ExitStack() as es_b:
        es_a.enter_context(lock_a)
        es_b.enter_context(lock_b)
        protected_by_a_and_b()
        es_b.close()
        protected_by_a()
        es_b.enter_context(lock_b)
        protected_by_a_and_b()
        es_a.close()
        protected_by_b()
```
You can even pass the exit stack objects as parameters to other functions for them to close and relock. But then it's up to you to debug the deadlocks in the monster you've created!
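For example (hypothetical helper names, same lock_a/lock_b setup as above), a callee handed es_b can drop and retake lock_b around a section that only needs lock_a:

```python
def helper_needing_only_a(es_b):
    # Caller holds lock_a and lock_b; lock_b is not needed for this part.
    es_b.close()                # release lock_b early
    long_running_a_only_work()  # hypothetical
    es_b.enter_context(lock_b)  # retake lock_b before returning to the caller
```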