python, multiprocessing, python-multiprocessing

Single thread file loading and multiprocessing


I have a big data file (a few GB up to a few tens of GB) that I want to read and process in multiple threads in Python.

My current approach is to read the file in parts (say 100 MB each) and pass that data to the threads like so:

from multiprocessing import Pool

with Pool(processes=8) as pool:

    file_chunk = readFileInJunks()  # iterator yielding ~100 MB chunks

    for i in range(8):
        pool.apply_async(f, (next(file_chunk),))

    pool.close()
    pool.join()

This is all fun and games until the file is bigger than my memory. So my question is: how do I avoid running into the memory limit?

I could let each process load its data chunk by itself, but that would mean n processes accessing the same file simultaneously, which would probably slow down the reading (especially on older disks). Another option would be to manually call apply_async whenever a slot frees up, which kind of defeats the whole point of a pool.
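For reference, that first alternative would look roughly like the sketch below; the file name, chunk size, and the process_range helper are made up just to illustrate the idea of each worker reading its own byte range:

import os
from multiprocessing import Pool

CHUNK_SIZE = 100 * 1024 * 1024   # 100 MB per read
FILE_PATH = "big_data.bin"       # hypothetical file name

def process_range(offset):
    # Each worker opens the file itself and reads only its own byte range,
    # so the parent process never holds the data in memory.
    with open(FILE_PATH, "rb") as fh:
        fh.seek(offset)
        chunk = fh.read(CHUNK_SIZE)
    ...  # process chunk
    return offset

def main():
    file_size = os.path.getsize(FILE_PATH)
    with Pool(processes=8) as pool:
        pool.map(process_range, range(0, file_size, CHUNK_SIZE))

if __name__ == "__main__":
    main()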

Is there a more elegant way to handle this?


Solution

  • What you need to do is break the file into chunks small enough that your 8 pool processes have sufficient memory to work on 8 chunks at a time. But you may be able to submit chunks to the pool faster than they can be processed, in which case the pool's task queue fills up with chunks waiting for an idle pool process. So we need a mechanism to throttle the reading and submission of chunks to the pool. It is useful to choose a chunk size such that 8 chunks can be processed simultaneously while 8 more sit on the task queue, so that there is no delay between a pool process finishing one chunk and starting the next. That means choosing a chunk size such that 16 chunks fit into memory; for example, with the 100 MB chunks mentioned in the question, 16 in-flight chunks occupy roughly 1.6 GB, plus whatever the workers allocate while processing.

    But how do we throttle the submission of tasks (chunks) to the pool so that there are never more than 16 chunks in memory? By using a properly initialized semaphore. In the following code the main process can immediately submit 16 chunks to the pool for processing, but is then blocked from submitting the next chunk until a pool process has finished with a previously submitted one:

    from multiprocessing import Pool
    from threading import Semaphore
    
    NUM_PROCESSES = 8  # The pool size
    
    # So that when a pool process completes work on a chunk, there is
    # always at least another chunk immediately available in the task queue
    # to work on:
    SUBMITTED_CHUNKS_COUNT = 2 * NUM_PROCESSES
    
    # Choose a chunk size so that SUBMITTED_CHUNKS_COUNT * CHUNK_SIZE fits in memory:
    CHUNK_SIZE = 10  # For demo purposes (not a realistic value)
    
    def read_file_in_chunks():  # Use name that conforms to PEP8 specification
        # For demo purposes just return the same string repeatedly:
        for _ in range(30):
            yield "abcdefghij"  # string of length CHUNK_SIZE
    
    def f(chunk):
        ...  # Do something with chunk
        print(chunk)
    
    def main():
        semaphore = Semaphore(SUBMITTED_CHUNKS_COUNT)
    
        def task_completed(result):
            """Called when a chunk has finished being processed."""
    
            # For now we do not care about the result
            semaphore.release()  # Allow a new chunk to be submitted
    
        with Pool(processes=NUM_PROCESSES) as pool:
            for chunk in read_file_in_chunks():
                semaphore.acquire()  # Throttle submissions
                pool.apply_async(f, args=(chunk,), callback=task_completed)
    
            pool.close()
            pool.join()
    
    if __name__ == '__main__':
        main()
    

    If you prefer larger chunks and are willing to accept the slight delay that occurs when a pool process finishes one chunk before it can start on a new one, set SUBMITTED_CHUNKS_COUNT = NUM_PROCESSES in the above code; you can then use a chunk size that is twice as large.
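    As an aside, the read_file_in_chunks generator above only yields dummy strings for the demo. For a real multi-gigabyte file it might look roughly like the sketch below, which assumes a binary file and lazily reads CHUNK_SIZE bytes at a time (the path is a placeholder):

    def read_file_in_chunks(path="big_data.bin"):  # placeholder path
        # Yield the file lazily, CHUNK_SIZE bytes at a time, so that only the
        # chunks currently submitted to the pool are ever held in memory.
        with open(path, "rb") as fh:
            while True:
                chunk = fh.read(CHUNK_SIZE)
                if not chunk:
                    break
                yield chunk

    One caveat: apply_async only invokes callback when f returns normally. If f can raise, the semaphore for that chunk is never released and submission will eventually block forever. A possible guard, using apply_async's error_callback parameter, is to add something like the following inside main(), next to task_completed, and pass it when submitting:

    def task_failed(exc):
        """Called when processing a chunk raised an exception."""

        print(f"chunk failed: {exc!r}")
        semaphore.release()  # Still free the slot so submission can continue

    # Submit with:
    # pool.apply_async(f, args=(chunk,), callback=task_completed,
    #                  error_callback=task_failed)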