I have a file that I want to process in parallel using Python's multiprocessing module. My current code is:
from multiprocessing import Pool, cpu_count

class rand:
    def __init__(self):
        self.rando = "world"

def do_work2(obj, line):
    return line + obj.rando

if __name__ == "__main__":
    num_workers = cpu_count() - 2
    pool = Pool(num_workers)
    ran = rand()
    with open("sample.txt") as f:
        # chunk the work into batches of 4 lines at a time
        results = pool.starmap(do_work2, zip(ran, f), 4)
    print(results)
I expect to see every line of my file with "world" concatenated at the end. However, when I run this code I get:
TypeError: 'rand' object is not iterable
I get why it is happening, but I am wondering whether there is a way to send a class object to a function and then use that object inside the function, all while multiprocessing.
Can someone help me, please?
As Michael notes, the error comes about because zip expects each of its arguments to be iterable, while your rand object is not. While Chems' fix works, it needlessly takes up memory and doesn't account for how large the file is. I'd prefer this way:
from itertools import repeat
pool.starmap(do_work2, zip(repeat(ran), f), 4)
repeat yields the ran object over and over (until you quit asking for it). This means zip will pair a ran with every line of f, without taking up memory in a separate list before being given to zip, and without needing to calculate how many lines f has.
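
For reference, here is the whole thing put together as a minimal runnable sketch. The sample.txt filename and the worker definitions come from your question; the with Pool(...) context manager and the max(1, ...) guard against machines with few cores are my additions:

from itertools import repeat
from multiprocessing import Pool, cpu_count

class rand:
    def __init__(self):
        self.rando = "world"

def do_work2(obj, line):
    return line + obj.rando

if __name__ == "__main__":
    ran = rand()
    # guard against machines with fewer than three cores
    num_workers = max(1, cpu_count() - 2)
    with Pool(num_workers) as pool:
        with open("sample.txt") as f:
            # repeat(ran) yields the same instance for every line;
            # zip stops as soon as f runs out of lines
            results = pool.starmap(do_work2, zip(repeat(ran), f), 4)
    print(results)

Note that iterating over a file keeps each line's trailing newline, so "world" ends up after the \n; strip it with line.rstrip("\n") inside do_work2 if you want everything on one line.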
I'd just scrap pool.starmap and use a normal pool.map, though. You can wrap your function in another function that supplies ran as the first argument. There are two ways of doing this. The quick-and-dirty lambda way:

pool.map(lambda line: do_work2(ran, line), f, 4)

Be aware, though, that a regular multiprocessing.Pool pickles the function it maps, and lambdas can't be pickled, so this variant will raise a PicklingError with a normal Pool; it only works with pools that don't pickle, such as the thread-based multiprocessing.dummy.Pool.
Or, the arguably more correct way of using partial:

from functools import partial
pool.map(partial(do_work2, ran), f, 4)

ran must be bound positionally here: map passes each line as the first free positional argument, so partial(do_work2, obj=ran) would collide with it and raise a TypeError about getting multiple values for obj.
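
The same end-to-end sketch using map with partial; again, sample.txt and the worker definitions come from the question, while the max(1, ...) guard is my assumption:

from functools import partial
from multiprocessing import Pool, cpu_count

class rand:
    def __init__(self):
        self.rando = "world"

def do_work2(obj, line):
    return line + obj.rando

if __name__ == "__main__":
    ran = rand()
    worker = partial(do_work2, ran)  # binds ran positionally as obj
    with Pool(max(1, cpu_count() - 2)) as pool:
        with open("sample.txt") as f:
            results = pool.map(worker, f, 4)
    print(results)

This pickles cleanly because partial objects are picklable whenever the wrapped function and bound arguments are, and both do_work2 and rand are defined at module level.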
See here for why you may want to prefer partial to a plain lambda.