At this moment of Jason Turner's 2016 CppCon talk "Practical Performance Practices", he mentions that enabling constexpr on every data structure that can support it (I take that to mean marking every field and function constexpr wherever possible) can result in bigger code, "because this causes more data structures to be compiled into your code so you have more data in the data segment than something that would be calculated at runtime" (the quote combines what he said at that timestamp with his answer to an audience question on the same topic at the end of the talk).
I don't really understand what that means. Why would constexpr data structures compile into something bigger than non-constexpr ones? Does anyone have a concrete example that shows this?
When implementing a 7-bit cyclic redundancy check (CRC) algorithm on a microcontroller, I find it handy to build a 256-byte lookup table ahead of time, with some code like this:
#include <stdint.h>

uint8_t crc_table[256];

/* Fill the table once at startup, before computing any CRCs. */
for (unsigned int i = 0; i < 256; i++)
{
    crc_table[i] = some_crc_function(i);
}
So if you turn crc_table into a constexpr array that gets computed at compile time, your toolchain has to store the 256-byte table in the executable, which takes up space. It can also remove the code for generating the table, but if the machine instructions for that code take less than 256 bytes, then I'd expect the executable to get bigger.
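For a concrete comparison, here is a minimal sketch of the constexpr version. It assumes C++17 (needed to write through std::array::operator[] in a constexpr function), and the body of some_crc_function is a hypothetical stand-in for the real per-byte CRC step, using one common left-justified formulation of the 7-bit polynomial 0x09 rather than a verified implementation:

#include <array>
#include <cstdint>

// Stand-in for the real per-byte step: one common formulation of a
// CRC-7 table entry (polynomial 0x09, kept left-justified in the byte).
constexpr uint8_t some_crc_function(unsigned int i)
{
    uint8_t crc = static_cast<uint8_t>(i);
    for (int bit = 0; bit < 8; bit++)
    {
        crc = (crc & 0x80) ? static_cast<uint8_t>((crc << 1) ^ (0x09 << 1))
                           : static_cast<uint8_t>(crc << 1);
    }
    return crc;
}

// The whole table is evaluated by the compiler, not at runtime.
constexpr std::array<uint8_t, 256> make_crc_table()
{
    std::array<uint8_t, 256> table{};
    for (unsigned int i = 0; i < 256; i++)
    {
        table[i] = some_crc_function(i);
    }
    return table;
}

// 256 bytes of precomputed data are emitted into the executable image.
constexpr auto crc_table = make_crc_table();

Now there is no table-building code in the program at all, but the 256 precomputed bytes are baked into the binary; on ELF targets they land in a read-only data section like .rodata, which you can inspect with objdump -s or compare with size. If the generator loop plus some_crc_function would have compiled to fewer than 256 bytes of machine instructions, which is quite plausible on a microcontroller, the constexpr build ends up larger, exactly as described above.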