I am trying to understand how to use memory pools for memory management, but I can't find much about it, even though it seems to be a very common mechanism.
All I know about this is that memory pools are "also called fixed-size blocks allocation" per Wikipedia, and that I can use those chunks to allocate memory for my objects.
Are there any standard specifications for memory pools?
I would like to know how this works on the heap, how it can be implemented, and how it should be used.
From this question about C++11 memory pool design patterns, I've read:
In case you haven't already, familiarize yourself with Boost.Pool. From the Boost documentation:
What is Pool?
Pool allocation is a memory allocation scheme that is very fast, but limited in its usage. For more information on pool allocation (also called simple segregated storage), see concepts and Simple Segregated Storage.
I can understand what he meant, but that doesn't help me understand how memory pools can help my application, or how to actually make use of them.
A simple example that shows how to use memory pools would be appreciated.
Any kind of "pool" is really just resources you've acquired or initialized in advance so that they're ready to go, not allocated on the fly with each client request. When clients finish using a resource, it returns to the pool instead of being destroyed.
Memory pools are basically just memory you've allocated in advance (and typically in big blocks). For example, you might allocate 4 kilobytes of memory in advance. When a client requests 64 bytes of memory, you just hand them a pointer to an unused space in that memory pool for them to read and write whatever they want. When the client is done, you can just mark that section of memory as being unused again.
As a basic example which doesn't bother with alignment, safety, or returning unused (freed) memory back to the pool:
#include <cassert>
#include <cstddef>
#include <new> // for the placement new used in the example below

class MemoryPool
{
public:
    MemoryPool(): ptr(mem)
    {
    }

    void* allocate(std::size_t mem_size)
    {
        // Bump allocation: hand out the next unused bytes in the buffer.
        assert(ptr + mem_size <= mem + sizeof mem && "Pool exhausted!");
        void* block = ptr;
        ptr += mem_size;
        return block;
    }

private:
    // Noncopyable (pre-C++11 style).
    MemoryPool(const MemoryPool&);
    MemoryPool& operator=(const MemoryPool&);

    char mem[4096];
    char* ptr;
};
...
{
    MemoryPool pool;

    // Allocate an instance of `Foo` into a chunk returned by the memory pool.
    Foo* foo = new(pool.allocate(sizeof(Foo))) Foo;
    ...

    // Invoke the dtor manually since we used placement new.
    foo->~Foo();
}
This is effectively just pooling memory from the stack. A more advanced implementation might chain blocks together and branch to check whether a block is full, so the pool never runs out of memory. It might deal in fixed-size chunks that are unions (list nodes when free, raw memory for the client when used). And it definitely needs to handle alignment (the easiest way is to max-align the memory blocks and add padding to each chunk so the subsequent one stays aligned).
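To make the union trick concrete, here's a minimal sketch of that fixed-size variant (the class name, chunk size, and chunk count are arbitrary choices of mine, and it still ignores thread safety and growth): each chunk stores the free list's "next" pointer while unused and becomes raw client memory once handed out.

#include <cassert>
#include <cstddef>

class FixedPool
{
public:
    FixedPool()
    {
        // Initially every chunk is free: thread them all onto the free list.
        for (std::size_t i = 0; i + 1 < num_chunks; ++i)
            chunks[i].next = &chunks[i + 1];
        chunks[num_chunks - 1].next = nullptr;
        free_list = &chunks[0];
    }

    void* allocate()
    {
        // Pop the head of the free list and hand that chunk to the client.
        assert(free_list && "Pool exhausted!");
        Chunk* chunk = free_list;
        free_list = chunk->next;
        return chunk->bytes;
    }

    void deallocate(void* p)
    {
        // Reinterpret the client's memory as a list node and push it back.
        Chunk* chunk = static_cast<Chunk*>(p);
        chunk->next = free_list;
        free_list = chunk;
    }

private:
    static const std::size_t chunk_size = 64;
    static const std::size_t num_chunks = 64;

    // A chunk is a list node when free, memory for the client when used.
    union Chunk
    {
        Chunk* next;
        // max_align_t-aligned so any type that fits in a chunk is
        // suitably aligned.
        alignas(std::max_align_t) char bytes[chunk_size];
    };

    Chunk chunks[num_chunks];
    Chunk* free_list;
};

Usage follows the same placement-new pattern as above, with one extra step to return the chunk: new (pool.allocate()) Foo, then foo->~Foo() and pool.deallocate(foo) when done (assuming sizeof(Foo) fits in a chunk). A growable version would chain additional chunk arrays together whenever the free list runs dry.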
Fancier still are buddy allocators, slabs, allocators that apply fitting algorithms, and so on. Implementing an allocator is not so different from implementing a data structure, except that you're knee-deep in raw bits and bytes, have to think about things like alignment, and can't shuffle the contents around (you can't invalidate existing pointers to memory in use). Like data structures, there isn't really a golden standard that says, "thou shalt do this." There's a wide variety of them, each with its own strengths and weaknesses, though there are some especially popular algorithms for memory allocation.
Implementing allocators is something I would actually recommend to many C and C++ developers, just to get more in tune with how memory management works. It can make you more conscious of how the memory being requested connects to the data structures using it, and it opens up a whole new door of optimization opportunities without touching any new data structures. It can also make data structures like linked lists, which are normally not very efficient, much more useful, and reduce the temptation to make opaque/abstract types less opaque just to avoid heap overhead.
However, there can be an initial excitement that tempts you to shoehorn custom allocators into everything, only to regret the additional burden later (especially if, in your excitement, you forget about issues like thread safety and alignment). It's worth taking it easy there. As with any micro-optimization, it's generally best applied discretely, in hindsight, and with a profiler in hand.