python, multiprocessing

Python multiprocessing.Pool lazy iteration


I'm wondering about the way that Python's multiprocessing.Pool class works with map, imap, and map_async. My particular problem is that I want to map over an iterator that creates memory-heavy objects, and I don't want all of these objects to be generated in memory at the same time. I wanted to see if the various map() functions would wring my iterator dry, or intelligently call its next() method only as child processes slowly advanced, so I hacked up some tests like this:

import time
from multiprocessing import Pool

def g():
  for el in xrange(100):
    print el
    yield el

def f(x):
  time.sleep(1)
  return x*x

if __name__ == '__main__':
  pool = Pool(processes=4)              # start 4 worker processes
  go = g()
  g2 = pool.imap(f, go)
  g2.next()

And so on with map, imap, and map_async. This is the most flagrant example, however: simply calling next() a single time on g2 prints out all of the elements from my generator g(), whereas if imap were doing this 'lazily' I would expect it to call go.next() only once, and therefore print only the first element, '0'.

Can someone clear up what is happening, and whether there is some way to have the process pool 'lazily' evaluate the iterator as needed?

Thanks,

Gabe


Solution

  • Let's look at the end of the program first.

    The multiprocessing module uses atexit to call multiprocessing.util._exit_function when your program ends.

    If you remove g2.next(), your program ends quickly.
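    To see the atexit mechanism in isolation, here is a minimal sketch using only the standard atexit module (nothing multiprocessing-specific); multiprocessing registers _exit_function in the same way:

    import atexit

    def cleanup():
        print 'runs as the interpreter shuts down'

    atexit.register(cleanup)
    print 'script body finished'
    # prints 'script body finished' first, then the registered handler runs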

    The _exit_function eventually calls Pool._terminate_pool. The main thread changes pool._task_handler._state from RUN to TERMINATE. Meanwhile, the pool._task_handler thread is looping in Pool._handle_tasks and bails out when it reaches the condition

                if thread._state:
                    debug('task handler found thread._state != RUN')
                    break
    

    (See /usr/lib/python2.6/multiprocessing/pool.py)
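    (The path above is for a stock 2.6 install; to find the file on your own system you can ask the module itself. Note that __file__ may point at the compiled .pyc sitting next to the source:)

    import multiprocessing.pool
    print multiprocessing.pool.__file__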

    This is what stops the task handler from fully consuming your generator, g(). If you look in Pool._handle_tasks you'll see

            for i, task in enumerate(taskseq):
                ...
                try:
                    put(task)
                except IOError:
                    debug('could not put task on queue')
                    break
    

    This is the code which consumes your generator. (taskseq is not exactly your generator, but as taskseq is consumed, so is your generator.)

    In contrast, when you call g2.next() the main thread calls IMapIterator.next, and waits when it reaches self._cond.wait(timeout).

    Because the main thread is waiting instead of calling _exit_function, the task handler thread runs normally, which means it fully consumes the generator as it puts tasks on the workers' inqueue in Pool._handle_tasks.
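
    You can verify this behaviour with a small sketch: record every value pulled from the generator in the parent process, fetch a single result from imap, and then check how much of the generator has already been consumed. (The counting_gen/consumed names are just for illustration, and the check assumes the task handler finishes draining the generator within the roughly one second it takes the first result to arrive.)

    import time
    from multiprocessing import Pool

    consumed = []                       # filled by the task handler thread in the parent process

    def counting_gen():
        for el in xrange(20):
            consumed.append(el)
            yield el

    def f(x):
        time.sleep(1)
        return x * x

    if __name__ == '__main__':
        pool = Pool(processes=4)
        it = pool.imap(f, counting_gen())
        print it.next()                 # ask for just one result...
        print len(consumed)             # ...but typically all 20 items have already been pulled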

    The bottom line is that all of the Pool map functions consume the entire iterable they are given. If you'd like to consume the generator in chunks, you could do this instead:

    import multiprocessing as mp
    import itertools
    import time
    
    
    def g():
        for el in xrange(50):
            print el
            yield el
    
    
    def f(x):
        time.sleep(1)
        return x * x
    
    if __name__ == '__main__':
        pool = mp.Pool(processes=4)              # start 4 worker processes
        go = g()
        result = []
        N = 11
        while True:
            # pull at most N items from the generator per call, so at most
            # N of the memory-heavy objects exist at any one time
            g2 = pool.map(f, itertools.islice(go, N))
            if g2:
                result.extend(g2)
                time.sleep(1)
            else:
                # islice returned nothing: the generator is exhausted
                break
        print(result)
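
    The same idea can also be packaged as a small helper that hands the pool one slice at a time. This is only a sketch of a variation on the loop above (it reuses the imports and the f/g definitions from that example, and the chunks() helper and the chunk size of 11 are made up for illustration), not a different mechanism:

    def chunks(it, n):
        # lazily pull successive lists of at most n items from the iterator
        while True:
            chunk = list(itertools.islice(it, n))
            if not chunk:
                break
            yield chunk

    if __name__ == '__main__':
        pool = mp.Pool(processes=4)
        result = []
        for chunk in chunks(g(), 11):
            # only this chunk's items are materialized at any one time
            result.extend(pool.map(f, chunk))
        print(result)

    Picking the chunk size is a trade-off: a larger value keeps the workers busier between map() calls, while a smaller value bounds how many of the heavy objects exist in memory at once.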