I am working with QuestDB 8.1.1 on an Ubuntu box for an application with around 4,000 channels at 30 Hz, where a high ingestion rate is important. To minimize out-of-order issues and to be able to query each independent channel quickly, I am creating a table per channel, with ~4,000 tables right now.
I'm seeing that QuestDB allocates 16 MB for each channel file in the most recently created table directory inside qdb_root/db/<table_name>~9/.
This makes the size on disk much larger than anticipated, and ~10 minutes after my batch upload, each channel file is reallocated to much smaller than 16 MB.
I would rather not wait until the file size gets reappraised. Is there a way I can trigger this reappraisal? I'm looking to understand this behaviour better, and for general feedback on the approach.
QuestDB preallocates some disk and memory pages in fixed-size chunks to avoid reallocations. This generally works well for a small number of tables, but when you have a lot of them it can add up in terms of both disk and memory allocation.
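To see why this adds up, here is a back-of-envelope calculation. The column count per table is an assumption for illustration; only the 16 MB preallocation and the 4,000-table count come from the question above.

```python
# Rough cost of fixed-size preallocation across many tables.
tables = 4000          # from the setup described above
cols_per_table = 5     # assumed; depends on your actual schema
prealloc_mib = 16      # observed per-file preallocation on disk

total_gib = tables * cols_per_table * prealloc_mib / 1024
print(f"~{total_gib:.1f} GiB preallocated before consolidation")
```

With five columns per table this already comes to roughly 312 GiB of transient disk usage, which is why shrinking the initial page size matters at this table count.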
There are a few config options you can tweak:
cairo.wal.writer.data.append.page.size=128K
cairo.writer.data.append.page.size=128K
cairo.o3.column.memory.size=128K
cairo.writer.data.index.key.append.page.size=128K
cairo.writer.data.index.value.append.page.size=128K
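These keys go into qdb_root/conf/server.conf (restart required), or can be passed as environment variables using QuestDB's documented mapping: prefix with QDB_, uppercase, and replace dots with underscores. A small sketch of that mapping, with a hypothetical helper function:

```shell
# Hypothetical helper: convert a QuestDB config key into its
# environment-variable form (QDB_ prefix, uppercase, '.' -> '_').
to_env_var() {
  printf 'QDB_%s' "$(printf '%s' "$1" | tr 'a-z.' 'A-Z_')"
}

# Example: derive and export the WAL writer page size setting.
export "$(to_env_var cairo.wal.writer.data.append.page.size)=128K"
# -> QDB_CAIRO_WAL_WRITER_DATA_APPEND_PAGE_SIZE=128K
```

The environment-variable route is handy for containerised deployments where you would rather not bake a server.conf into the image.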
While there will still be periodic consolidation of the files down to the size needed for the actual data, these values mean the initial allocation will be smaller and will be extended on demand if needed.
You can find more details about each setting and its default value in the QuestDB config reference, but these values are a good starting point for users with thousands of tables, and are often recommended by the core team on Slack. You might want to start with them and fine-tune as you observe how it works for you.
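To observe the consolidation happening rather than guessing at it, you can watch per-table disk usage directly. A minimal sketch, assuming your qdb_root path (the helper function and the default path here are examples, not anything QuestDB ships):

```shell
# Hypothetical helper: list the largest entries under a directory in KiB,
# so you can watch table directories shrink after consolidation runs.
top_usage() {
  du -sk "$1"/* 2>/dev/null | sort -rn | head -n "${2:-10}"
}

# Example: top 20 table directories by size (adjust the root path).
QDB_ROOT="${QDB_ROOT:-/var/lib/questdb}"
top_usage "$QDB_ROOT/db" 20
```

Running this before and a few minutes after a batch upload shows the preallocated 16 MB files being trimmed back.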