
What can cause a page fault at the C++ level?


I'm a C++ developer and I'm wondering what can cause a page fault at the C++ level.

I've read some articles about page faults, and I think fork() and malloc/new can cause them.

Are there other things that can cause page faults?

Does a very large executable file have a higher chance of causing page faults?

Does an executable file with a very complex logical structure have a higher chance of causing page faults?


Solution

  • Actually, malloc by itself typically doesn't cause any page faults. The memory is only allocated virtually, so until you use it, it doesn't take up space in RAM or on disk. If you really want to trigger page faults rapidly, you'll have to actually access the allocated buffers, either for reading or for writing.
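
    In fact, you can watch this happen: on POSIX systems, getrusage() reports the process's fault counters. A minimal sketch, assuming Linux/glibc, where a large malloc is served by mmap and its pages are mapped in lazily (the 64 MiB size is just an arbitrary choice for illustration):

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <sys/resource.h>

    static long minor_faults() {
        struct rusage ru {};
        getrusage(RUSAGE_SELF, &ru);            // per-process resource usage
        return ru.ru_minflt;                    // minor (soft) page faults so far
    }

    int main() {
        const size_t N = 64 * 1024 * 1024;      // 64 MiB
        long before = minor_faults();
        char *buf = static_cast<char *>(std::malloc(N));
        long after_alloc = minor_faults();
        std::memset(buf, 1, N);                 // first touch faults the pages in
        long after_touch = minor_faults();
        std::printf("alloc: +%ld faults, touch: +%ld faults\n",
                    after_alloc - before, after_touch - after_alloc);
        std::free(buf);
    }

    Typically the allocation itself adds next to nothing, while the memset adds roughly one minor fault per 4 KiB page touched.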

    It all boils down to memory usage. If the application keeps accessing the same 2-3 GB of data, it may be able to run with almost no page faults occurring (assuming no other application is currently hogging your RAM). You'll only see page faults when your application needs to access a lot of memory, or memory that has gone "cold" from lack of use.

    Additionally, the OS loads entire pages from disk even if you only need to access a single byte of a page. This means that if your data is spread across a large area of memory, you may experience more page faults than if all of your data were concentrated in the same vicinity.

    A good test application for understanding this mechanism would be to allocate huge buffers, more than your RAM can hold, and then start modifying a single character at 4 KB intervals (the usual size of a single page on both Linux and Windows). The idea is to dirty as many pages as possible with minimal effort, similar to ruining a perfectly good ream of white paper with a single black dot on every page, until your RAM cannot hold so many dirty pages and has to swap them to disk in order to load other pages for you to dirty:

    #include <cstdlib>  // std::malloc, std::rand
    int main() {
        const size_t HUGE_NUMBER = 1ULL << 30;           // e.g. 1 GiB per pass
        while (true) {                                   // leak on purpose: exhaust RAM
            char *data = static_cast<char *>(std::malloc(HUGE_NUMBER));
            if (data == nullptr) break;                  // allocation finally failed
            for (size_t i = 0; i < HUGE_NUMBER; i += 4096)
                data[i] = (char)std::rand();             // dirty in 4K intervals
        }
    }
    

    So a good approach to minimizing page faults is to keep your data access patterns local (prefer arrays, which are sequential in memory, over lists or maps whose nodes may be spread all over), and to avoid writing applications that require more RAM than the target server has to offer. The sketch below illustrates the first point.
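
    To see the locality effect, compare traversing the same values stored contiguously in a std::vector versus in a node-based std::list. A rough illustration; exact numbers depend on your allocator, your machine, and how much of the data fits in RAM:

    #include <chrono>
    #include <cstdio>
    #include <list>
    #include <vector>

    template <typename C>
    void time_sum(const char *name, const C &c) {
        auto t0 = std::chrono::steady_clock::now();
        long long sum = 0;
        for (int v : c) sum += v;               // identical work for both containers
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - t0).count();
        std::printf("%s: sum=%lld in %lld ms\n", name, sum, (long long)ms);
    }

    int main() {
        const int N = 10000000;                 // ten million ints
        std::vector<int> vec(N, 1);             // one contiguous block of memory
        std::list<int> lst(vec.begin(), vec.end()); // a separate heap node per element
        time_sum("vector (sequential pages)", vec);
        time_sum("list (scattered pages)", lst);
    }

    The vector touches each page once and moves on; the list may bounce between pages, which hurts even more once those pages start getting swapped out.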

    Regarding the executable size, it also depends on how much of the code is actually in use. If your application spends 90% of its time running 10% of the code, then the probability of page faults due to the size of the executable is low, and vice versa.
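
    Finally, if some part of your working set absolutely must stay resident (a latency-critical path, for example), POSIX systems let you pin pages in RAM. A minimal sketch, assuming Linux and a sufficient RLIMIT_MEMLOCK:

    #include <cstdio>
    #include <sys/mman.h>

    int main() {
        // Pin all current and future pages (code and data) of this process
        // so they cannot be paged out. Use sparingly: it takes RAM away from
        // the rest of the system and may require elevated privileges/limits.
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            std::perror("mlockall");
            return 1;
        }
        // ... run the latency-critical work here ...
        munlockall();                           // undo the pinning when done
    }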