I have a scenario in my Python application where some work needs to be done in the background, in a fire-and-forget manner. There are two constraints I'm trying to satisfy:

- The tasks must not block the main execution thread.
- The tasks must all run in the same background process (and on the same thread within it), because they call into Julia via PyJulia.
What's the simplest way of handling a situation like this? I imagine that I have to create a single long-lived worker thread or process to which I can continually queue tasks, but I don't want to write custom code to handle all of this if I don't have to.
The simplest solution I've found, which is satisfactory for my needs, is to use Python's `multiprocessing.Pool` to create a pool containing exactly one process, keep that pool around for the life of my application, and use `apply_async` to execute tasks on it in a fire-and-forget manner:
```python
from multiprocessing import Pool
...

class MyClass:
    def __init__(self):
        self.process_pool = None

    def __getstate__(self):
        # apply_async pickles the bound method (and thus this instance);
        # Pool objects can't be pickled, so exclude the pool from the copy
        # sent to the worker process.
        state = self.__dict__.copy()
        state["process_pool"] = None
        return state

    def my_pyjulia_task(self, arg1, arg2):
        ...

    def run(self, arg1, arg2):
        # Lazily create a pool holding a single long-lived worker process.
        if not self.process_pool:
            self.process_pool = Pool(processes=1)
        self.process_pool.apply_async(self.my_pyjulia_task, (arg1, arg2))
```
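For completeness, a usage sketch (argument values are placeholders; on platforms that spawn worker processes rather than forking, the pool should be created under an `if __name__ == "__main__":` guard):

```python
if __name__ == "__main__":
    obj = MyClass()
    obj.run("a", 1)   # returns immediately
    obj.run("b", 2)   # queued behind the first task

    # Optional: on shutdown, let any queued tasks finish.
    obj.process_pool.close()  # stop accepting new work
    obj.process_pool.join()   # wait for the worker to drain the queue
```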
This ensures that PyJulia doesn't block the main execution thread and always runs in the same "background" process (and also the same thread). Apparently `apply_async` also queues up work for that process, since I can call it many times and the tasks are executed in order.
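A minimal, self-contained sketch of that queueing behavior, with a throwaway `task` function of my own (in fire-and-forget use you would simply discard the `AsyncResult` objects):

```python
from multiprocessing import Pool

def task(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=1) as pool:
        # Each apply_async call enqueues one task; the single worker
        # drains the queue in FIFO order, so tasks run sequentially.
        results = [pool.apply_async(task, (n,)) for n in range(5)]
        # Calling .get() here only demonstrates the ordering.
        print([r.get() for r in results])  # [0, 1, 4, 9, 16]
```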
It also wouldn't be hard to use `multiprocessing` tools to enable communication from this process back to the main application, if that ever became necessary.
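For instance, the `callback` and `error_callback` parameters of `apply_async` already provide a simple channel back to the parent process; a sketch with hypothetical handler names:

```python
from multiprocessing import Pool

def task(n):
    return n * n

def on_done(value):
    # Runs in the parent process (on a pool-internal thread)
    # whenever a task completes successfully.
    print("task finished with", value)

def on_error(exc):
    # Without this, fire-and-forget tasks fail silently.
    print("task failed:", exc)

if __name__ == "__main__":
    pool = Pool(processes=1)
    pool.apply_async(task, (7,), callback=on_done, error_callback=on_error)
    pool.close()
    pool.join()
```

The `error_callback` is worth having in any case: since fire-and-forget code never calls `.get()` on the `AsyncResult`, a task's exception is otherwise silently discarded.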