On Linux with a 4096-byte page size, suppose we map virtual memory with the mmap function and request 1048576 bytes (1 MiB), i.e. 1048576 / 4096 = 256 pages. Are all of these pages contiguous (adjacent to each other) in physical memory? In other words, for this 1 MiB of virtual memory, after accessing its first byte, do we get TLB hits for the remaining pages, or can each 4096-byte page land anywhere in physical memory, so that the first access to each page may be a TLB miss?
To put it more simply: 1 MiB of virtual memory is allocated (256 pages). The first access to this memory will be a TLB miss, since its translation is not yet in the TLB; after the page table walk fills the TLB, subsequent accesses to that page hit. But when we then access the other pages of this region, do we get TLB hits without further misses for each of them, or can each page live at an arbitrary physical location, so the pages are not contiguous and each page's first access is a TLB miss?
There is certainly no guarantee, in Linux or any other OS that I know of, that contiguous virtual pages will correspond to contiguous physical pages. One of the main advantages of virtual memory is that it avoids fragmentation issues - even if physical memory has only scattered pages available, they can still be mapped to a contiguous region of virtual memory. Providing a guarantee of contiguous physical pages would defeat this.
Generally, you should expect that accessing virtual memory for the first time will incur a TLB miss and page table walk for each page, regardless of whether they are physically contiguous or not.
Some CPUs might be able to predict, given a pattern of sequential access, that later pages in the sequence may also be accessed soon, and try to populate the TLB in advance. But if so, I'd expect this to be based on whether the pages are sequential in terms of their virtual addresses. After all, programs see virtual memory, so when they access a region in sequence, it's going to be sequential virtual addresses, and so this is the case I'd expect CPUs to optimize for. So there'd be no benefit to giving you physically contiguous pages anyway.
If you want a large block of memory that's contiguous both in physical and virtual memory, in order to minimize the number of TLB misses, page faults, etc., then what you want is huge pages.