I'm studying memory-mapping operations on sparse files. I noticed that once a write operation is issued, it occupies a minimum of 64KB of space. When I write to the next 64KB region of the logical address space, the file size increases by another 64KB, making the total file size 128KB. I'm confused by this behavior: even if I write just 1 byte of data at each of logical offsets 0 and 65536, the sparse file still occupies 128KB of space.
My questions are: Is this the correct way to use sparse files? Is the 64KB growth a setting in Windows? Additionally, if I first perform a write operation to a high logical address (for example, writing to the 512KB position) and then write to the 0KB logical address, will the file report a size of 512KB or will it occupy 128KB as two 64KB blocks?
Here is my test code:
#include <iostream>
#include <windows.h>
#include <fileapi.h>
using namespace std;
const char* FILENAME = "sparse_test.txt";
HANDLE hFile = nullptr;
// Mode 1: touch only the first byte of each interval.
void writeSparse(BYTE* data, int n, int offset) {
    for (size_t i = 0; i < size_t(n) * 1024; i += offset) {
        data[i] = 1;
        cout << "Wrote 1 byte at offset " << i << "\n";
    }
}
// Mode 2: fill every byte of each interval.
void writeSequential(BYTE* data, int n, int offset) {
    for (size_t i = 0; i < size_t(n) * 1024; i += offset) {
        memset(data + i, '2', offset);
        cout << "Wrote bytes " << i << "-" << (i + offset - 1) << "\n";
    }
}
int main(int argc, char* argv[]) {
    if (argc != 4) {
        cerr << "Usage: " << argv[0] << " <mode 1|2> <total KB> <offset KB>\n";
        return 1;
    }
    hFile = CreateFileA(FILENAME, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                        CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        cerr << "CreateFileA failed: " << GetLastError() << "\n";
        return 1;
    }
    DWORD dwTemp;
    if (!DeviceIoControl(hFile, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &dwTemp, NULL)) {
        cerr << "FSCTL_SET_SPARSE failed: " << GetLastError() << "\n";
        return 1;
    }
    // CreateFileMapping extends the file to the mapping size (1 MiB) if it is smaller.
    HANDLE hMapping = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, 1024 * 1024, NULL);
    if (hMapping == NULL) {
        cerr << "CreateFileMappingA failed: " << GetLastError() << "\n";
        return 1;
    }
    BYTE* data = (BYTE*)MapViewOfFile(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (data == NULL) {
        cerr << "MapViewOfFile failed: " << GetLastError() << "\n";
        return 1;
    }
    if (stoi(argv[1]) == 1)
        writeSparse(data, stoi(argv[2]), stoi(argv[3]) * 1024);
    else
        writeSequential(data, stoi(argv[2]), stoi(argv[3]) * 1024);
    UnmapViewOfFile(data);
    CloseHandle(hMapping);
    CloseHandle(hFile);
    return 0;
}
How to Use This Code:
Compile the code into an executable, e.g., test.exe.
Run the executable with three arguments:
The first argument is the mode: 1 for scattered writes (writing the first byte in each designated interval) or 2 for sequential writes (writing all bytes in the specified jump interval).
The second argument represents the maximum amount of data to be written, in kilobytes (KB).
The third argument is the offset for each write operation, in kilobytes (KB).
For example:
For scattered writing: test.exe 1 128 4 means write the first byte of each 4KB interval, up to a maximum of 128KB.
For sequential writing: test.exe 2 128 4 means write all bytes of each 4KB interval, up to a maximum of 128KB.
Memory-mapped IO can only track memory usage at page-size granularity (typically 4 kiB). The OS cannot intercept every single byte of IO; the only thing it traps is the first time you read or write a page at all. That page fault causes the OS to load existing data from disk if present, or to allocate a new zero-filled page. For a write it also marks the page dirty so that the change is flushed to disk later.
Windows uses a larger 64 kiB granularity for historical reasons, as far as I understand it. It's explained here by Raymond Chen.
Another reason may be the allocation granularity of the file system: many file systems can be formatted with different block/cluster sizes.
All of these things are platform-dependent and subject to change. For example, ARM can use pages of up to 64 kiB by default, with some potential benefits.