I have a program in which multiple processes try to complete a function. My goal is to stop all the other processes as soon as one process has successfully finished the function.
Unfortunately, the Python program shown below waits until all the processes have solved the task given in the find function. How can I fix this?
import multiprocessing
import random

FIND = 50
MAX_COUNT = 100000
INTERVAL = range(10)

def find(process, initial, return_dict):
    succ = False
    while succ == False:
        start = initial
        while(start <= MAX_COUNT):
            if(FIND == start):
                return_dict[process] = f"Found: {process}, start: {initial}"
                succ = True
                break
            i = random.choice(INTERVAL)
            start = start + i
            print(start)

processes = []
manager = multiprocessing.Manager()
return_code = manager.dict()

for i in range(5):
    process = multiprocessing.Process(target=find, args=(f'computer_{i}', i, return_code))
    processes.append(process)
    process.start()

for process in processes:
    process.join()

print(return_code.values())
The output can be, for example:
['Found: computer_0, start: 0', 'Found: computer_4, start: 4', 'Found: computer_2, start: 2', 'Found: computer_1, start: 1', 'Found: computer_3, start: 3']
But this output shows that the program waits until all processes have finished ...
Use an Event to govern whether the processes should keep running. Basically, it replaces succ with something that works across all processes.
import multiprocessing
import random

FIND = 50
MAX_COUNT = 1000

def find(process, initial, return_dict, run):
    while run.is_set():
        start = initial
        while start <= MAX_COUNT:
            if FIND == start:
                return_dict[process] = f"Found: {process}, start: {initial}"
                run.clear()  # Stop running.
                break
            start += random.randrange(0, 10)
            print(start)

if __name__ == "__main__":
    processes = []
    manager = multiprocessing.Manager()
    return_code = manager.dict()
    run = manager.Event()
    run.set()  # We should keep running.

    for i in range(5):
        process = multiprocessing.Process(
            target=find, args=(f"computer_{i}", i, return_code, run)
        )
        processes.append(process)
        process.start()

    for process in processes:
        process.join()

    print(return_code.values())
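One thing to be aware of: the workers only look at the event at the top of the outer loop, so a process that is already inside the inner loop will finish that pass before noticing the event was cleared. If you want the other processes to stop sooner, a possible variant (not part of the code above, and assuming the same FIND, MAX_COUNT and imports) is to test the event inside the inner loop as well:

def find(process, initial, return_dict, run):
    while run.is_set():
        start = initial
        # Also check the event here, so the worker stops shortly after
        # another process has cleared it.
        while run.is_set() and start <= MAX_COUNT:
            if FIND == start:
                return_dict[process] = f"Found: {process}, start: {initial}"
                run.clear()  # Stop running.
                break
            start += random.randrange(0, 10)
            print(start)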
Note that using the __name__ == "__main__" guard is mandatory for multiprocessing to work properly when the process-creation method is set to 'spawn', which is the default on MS Windows and macOS but also available on Linux. On those systems, the main module is imported into newly created Python processes. This has to happen without side effects such as starting a process, and the __name__ guard ensures that.
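If you want to force a particular start method yourself (for example, to try out the 'spawn' behaviour on Linux), multiprocessing.set_start_method can be called once, inside the guard. A minimal sketch, using a hypothetical work function just for illustration:

import multiprocessing

def work():
    # Hypothetical worker, only here to demonstrate the start method.
    print("child process running")

if __name__ == "__main__":
    # Force 'spawn': the child re-imports this module, so anything
    # outside the __name__ guard would run again in the child.
    multiprocessing.set_start_method("spawn")
    process = multiprocessing.Process(target=work)
    process.start()
    process.join()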