I have a Python service that uses Python's virtual threads (`threading.Thread`) to handle requests. There is a piece of shared singleton functionality that all threads try to access, protected using `threading.Lock`:
```python
import threading

g_lock = threading.Lock()

def my_threaded_functionality():
    try:
        g_lock.acquire()
        # ... Do something with a shared resource ...
    finally:
        g_lock.release()
```
In the docs of `threading.Lock.acquire`, there is no mention of fairness, whereas the docs of asyncio's `asyncio.Lock.acquire` state that the lock is fair. Since I want to prevent starvation of threads and process tasks in the order they arrive, I would go for asyncio's `Lock` if the docs didn't say it is not thread-safe. The question is whether this should also be an issue with Python's virtual "threads".
> Python's virtual threads (`threading.Thread`)

CPython threads are native OS threads, not virtual threads; the concept of a virtual thread doesn't exist in CPython.
asyncio's `Lock` is not thread-safe, so you cannot use it for multithreaded synchronization; only `threading.Lock` is safe for multithreaded access.
You can serialize access to this resource with a thread pool of a single thread: it has a queue internally and guarantees fairness (first-in, first-out), so you don't need locks at all. As a bonus, you can use `loop.run_in_executor` to await it from your event loops.
```python
import asyncio
import concurrent.futures

my_pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def non_thread_safe_task():
    return "not thread-safe"

# From async code: await the result without blocking the event loop.
async def my_threaded_functionality_async():
    my_loop = asyncio.get_running_loop()
    result = await my_loop.run_in_executor(my_pool, non_thread_safe_task)

# From sync code: block until the single worker has run the task.
def my_threaded_functionality():
    result = my_pool.submit(non_thread_safe_task).result()
```
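To see the FIFO guarantee in action, here is a small sketch (variable names are my own): with `max_workers=1`, submitted work items wait in the executor's internal queue and the lone worker runs them strictly in submission order.

```python
import concurrent.futures

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
order = []

# Submit five tasks; the single worker drains them first-in, first-out.
futures = [pool.submit(order.append, i) for i in range(5)]
concurrent.futures.wait(futures)
pool.shutdown()

print(order)  # [0, 1, 2, 3, 4]
```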
`concurrent.futures.ThreadPoolExecutor` spawns threads lazily, so it is fine to keep it in global scope: it doesn't create a thread until it is first used. Still, I'd rather wrap the whole thing in a class.
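A minimal sketch of that class wrapper (the name `SerializedResource` and its methods are illustrative, not from any library): the class owns both the non-thread-safe resource and the single-worker pool that serializes every access.

```python
import concurrent.futures

class SerializedResource:
    """Hypothetical wrapper: funnels all access to a non-thread-safe
    resource through a single worker thread, in FIFO order."""

    def __init__(self):
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def run(self, fn, *args):
        # Blocks the caller until the lone worker has executed fn.
        return self._pool.submit(fn, *args).result()

    def close(self):
        self._pool.shutdown()

res = SerializedResource()
print(res.run(lambda: "not thread-safe"))  # prints: not thread-safe
res.close()
```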
Note: sending work to another thread and back adds roughly 10-50 microseconds of latency, so only do this if you must guarantee ordering; otherwise just use a `threading.Lock`:
```python
import threading

g_lock = threading.Lock()

def my_threaded_functionality():
    with g_lock:
        ...  # ... Do something with a shared resource ...
```
There's also an async way to acquire a `threading.Lock` from an event loop (which again carries this extra 10-50 microseconds of overhead). If you are in async code, I'd probably still use the one-worker thread pool, since that works from multiple threads as well.
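A minimal sketch of that async variant, assuming Python 3.9+ for `asyncio.to_thread`: the blocking `acquire()` is offloaded to a worker thread so the event loop keeps running while the coroutine waits for the lock.

```python
import asyncio
import threading

g_lock = threading.Lock()

async def my_async_functionality():
    # Run the blocking acquire() in a worker thread; the event loop
    # stays responsive while this coroutine awaits the lock.
    await asyncio.to_thread(g_lock.acquire)
    try:
        pass  # ... Do something with a shared resource ...
    finally:
        g_lock.release()

asyncio.run(my_async_functionality())
```

Under the hood `asyncio.to_thread` uses the loop's default executor, so this is the thread-hop overhead mentioned above.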