I want to use Huge Pages with memory-mapped files on Linux 3.13.
To get started, on Ubuntu I did this to allocate 10 huge pages:
sudo apt-get install hugepages
sudo hugeadm --pool-pages-min=2048K:10
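(Checking grep Huge /proc/meminfo afterwards should show HugePages_Total of at least 10, assuming the default 2 MB huge page size.)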
Then I ran this test program:
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>
int main(void)
{
    size_t size = 2 * 1024 * 1024; /* 1 huge page */
    int fd = open("foo.bar", O_RDWR|O_CREAT, 0666);
    assert(fd >= 0);
    int rc = ftruncate(fd, size);
    assert(rc == 0);
    void* hint = 0;
    int flags = MAP_SHARED | MAP_HUGETLB;
    void* data = mmap(hint, size, PROT_READ|PROT_WRITE, flags, fd, 0);
    if (data == MAP_FAILED)
        perror("mmap");
    assert(data != MAP_FAILED);
    return 0;
}
It always fails with EINVAL. If you change flags to MAP_PRIVATE|MAP_ANONYMOUS then it works, but of course it won't write anything to the file.
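For reference, the working anonymous variant looks like this (a minimal sketch that keeps MAP_HUGETLB; with MAP_ANONYMOUS the fd and offset arguments are ignored, so nothing reaches foo.bar):

int aflags = MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB;
void* anon = mmap(NULL, size, PROT_READ|PROT_WRITE, aflags, -1, 0);
if (anon == MAP_FAILED)
    perror("mmap (anonymous)");
assert(anon != MAP_FAILED);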
I also tried using madvise() after mmap() without MAP_HUGETLB:
rc = madvise(data, size, MADV_HUGEPAGE);
if (rc != 0)
perror("madvise");
assert(rc == 0);
This also fails (EINVAL) if MAP_ANONYMOUS is not used.
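For completeness, the combination that does accept MADV_HUGEPAGE is an anonymous private mapping without MAP_HUGETLB, i.e. transparent huge pages (a minimal sketch; it assumes the kernel was built with CONFIG_TRANSPARENT_HUGEPAGE, and again there is no file backing):

void* thp = mmap(NULL, size, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
assert(thp != MAP_FAILED);
int mrc = madvise(thp, size, MADV_HUGEPAGE);
if (mrc != 0)
    perror("madvise");
assert(mrc == 0);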
Is there any way to enable huge pages with memory-mapped files on disk?
To be clear, I am looking for a way to do this in C; I am not asking for a solution to apply to existing executables (then the question would belong on SuperUser).
It looks like the underlying filesystem you are using does not support memory-mapping files using huge pages.
For example, for ext4 this support was still under development as of January 2017, and has not been merged into the mainline kernel as of May 19, 2017.
If you run a kernel with that patchset applied, note that you also need to enable huge page support in the filesystem mount options, for example by adding huge=always to the fourth column in /etc/fstab for the filesystems desired, or by running sudo mount -o remount,huge=always /mountpoint.
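For example, a matching /etc/fstab entry might look like the following (hypothetical device and mount point; the fourth column holds the mount options):

/dev/sdb1  /data  ext4  defaults,huge=always  0  2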