Tags: python, python-2.7, file, garbage-collection

How can I force Python to re-read input files from disk without rebooting my computer?


I am scanning through a large number of files looking for some markers. I am becoming confident that, once the code has run one time, Python is not re-reading the actual files from disk. I find this behavior strange because I was told that one reason I needed to structure my file access the way I have is so that the handle and file contents are flushed. But that can't be the case.

There are 9,568 file paths in the list I am reading from. If I shut down Python and reboot my computer, it takes roughly 6 minutes to read the files and determine whether the regular expression returns anything.

However, if I run the code a second time it takes about 36 seconds. Just for grins, the average document has 53,000 words.

Therefore I am concluding that Python still has access to the files it read in the first iteration.

I also want to note that the first time I do this I can hear the disk spin (the files are on E:\; Python is on C:). E: is just a spinning disk with a 126 MB cache - I don't think the cache is big enough to hold the contents of these files. On later runs I do not hear the disk spin.

Here is the code:

import re

test_7A_re = re.compile(r'\n\s*ITEM\s*7\(*a\)*[.]*\s*-*\s*QUANT.*\n', re.IGNORECASE)
no7a = []
for path in path_list:
    path = path.strip()
    with open(path, 'r') as fh:
        string = fh.read()
    items = list(test_7A_re.finditer(string))
    if not items:
        no7a.append(path)
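As a quick sanity check, the pattern does match headings in the style it appears to target. The heading below is made up purely for illustration (it is not taken from any real file):

```python
import re

# Same pattern as in the loop above.
test_7A_re = re.compile(r'\n\s*ITEM\s*7\(*a\)*[.]*\s*-*\s*QUANT.*\n', re.IGNORECASE)

# A made-up heading in the style the pattern targets.
sample = "\n  Item 7(a) - Quantitative and Qualitative Disclosures\n"
match = test_7A_re.search(sample)
print(match is not None)  # -> True
```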

I care about this for a number of reasons. One is that I was thinking about using multiprocessing, but if the bottleneck is reading in the files, I don't see that I will gain much. I also think this is a problem because I would worry that a file could be modified and I would not have the most recent version of it available.
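One way to check whether reading or matching dominates is to time the two separately. This is only a sketch: it assumes `path_list` is the same list of file paths used in the question, and swaps the `finditer` list for a single `search` since only the presence of a match is tested:

```python
import re
import time

test_7A_re = re.compile(r'\n\s*ITEM\s*7\(*a\)*[.]*\s*-*\s*QUANT.*\n', re.IGNORECASE)

def scan(path_list):
    # Accumulate time spent reading files vs. time spent matching the regex.
    read_time = 0.0
    match_time = 0.0
    no7a = []
    for path in path_list:
        path = path.strip()
        t0 = time.time()
        with open(path, 'r') as fh:
            string = fh.read()
        read_time += time.time() - t0
        t0 = time.time()
        if not test_7A_re.search(string):
            no7a.append(path)
        match_time += time.time() - t0
    return no7a, read_time, match_time
```

If `read_time` dominates on a cold run but collapses on the second run while `match_time` stays put, that points at file reading (and whatever is caching it) rather than the regular expression.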

I am tagging this 2.7 because I have no idea whether this behavior is consistent across versions.

To confirm this behavior I modified my code to run as a .py file and added some timing code. I then rebooted my computer: the first run took 5.6 minutes, and the second run (without rebooting) took 36 seconds. The output was the same in both cases.

The really interesting thing is that even if I shut down IDLE (but do not reboot my computer), the code still runs in 36 seconds.

All of this suggests to me that the files are not read from disk after the first time - this is amazing behavior to me but it seems dangerous.

To be clear, the results are the same each time. Given the timing tests I have run and the fact that I do not hear the disk spinning, I believe the files are somehow still accessible to Python without going back to disk.


Solution

  • This is caused by file caching in Windows. It is not related to Python.

    To stop Windows from caching your reads:

    1. Disable the paging file in Windows and fill the RAM to about 90%, so there is little room left for the file cache.

    2. Use a tool to disable file caching in Windows, like this one.

    3. Run your code in a Linux VM on your Windows machine with limited RAM. On Linux you can control the page cache much more directly.

    4. Make the files much bigger, so that they won't fit in the cache.
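To convince yourself that the speedup comes from OS-level caching rather than from Python holding onto anything, you can time two back-to-back reads of the same file. This is just a sketch using a made-up temporary file; note that the first read may already be partly cached because the file was just written:

```python
import os
import tempfile
import time

# Create a throwaway file (~32 MB of zeros) purely for the timing test.
fd, path = tempfile.mkstemp()
os.write(fd, b'\0' * (32 * 1024 * 1024))
os.close(fd)

def timed_read(p):
    # Return (elapsed seconds, number of bytes read) for one full read.
    t0 = time.time()
    with open(p, 'rb') as fh:
        data = fh.read()
    return time.time() - t0, len(data)

cold, size = timed_read(path)  # may hit disk (unless already cached by the write)
warm, _ = timed_read(path)     # almost certainly served from the OS file cache

os.remove(path)
```

Python's `open()`/`read()` go through the operating system's page cache on every call; nothing about the interpreter keeps file contents alive between runs, which is why even restarting IDLE did not change the 36-second figure.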