Tags: c++, binary, bit, nibble

Why does C++ break data down into nibbles?


Why is information stored in sequences of four bits (nibbles)? Is there any particular reason that four bits were selected over, say, three bits or five bits? I've been wondering about this, and I haven't found a definitive answer (if there is one) as to why we group bits this way.


Solution

  • The closest nibbles get to being relevant is that a number in hex format has one digit per nibble. The reason hex is seen so often in code is simply that it lets the common 8-bit byte be represented with exactly two hex digits, which is reasonably concise and not too hard for humans to get used to. It's easy enough to mentally convert back to binary, while not losing track of which digits you're looking at the way you can with a 32-bit or 64-bit value written out in binary (see the first sketch at the end of this answer).

    C++ bit-fields allow structs to pack members at arbitrary widths and positions, so you can create "nibbles" if you like. But in the likely case that the CPU lacks any special support for nibbles, or the C++ optimiser considers such instructions useful too infrequently to bother emitting them, the compiled code will be bit-shifting and bitwise-ANDing/ORing values into and out of the CPU-addressable units of memory (bytes or words) that hold them, just as it's likely to have to do for fields of other unusual widths (see the second sketch below).

    A few CPUs have supported Binary Coded Decimal (BCD) number representations, where each decimal digit occupies a nibble, but that's not supported by the C++ Standard, so it has to be done by hand (see the third sketch below).
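    First, a minimal sketch of the hex-digit-per-nibble correspondence, extracting each nibble of a byte with an ordinary shift and mask (values chosen arbitrarily for the example; binary literals need C++14):

        #include <cstdint>
        #include <cstdio>

        int main() {
            std::uint8_t byte = 0b10110100;          // one 8-bit byte
            std::uint8_t high = (byte >> 4) & 0x0F;  // upper nibble: 0b1011 == 0xB
            std::uint8_t low  = byte & 0x0F;         // lower nibble: 0b0100 == 0x4
            std::printf("%02X = %X%X\n", byte, high, low);  // prints "B4 = B4"
        }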
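    Second, a bit-field sketch, assuming a typical implementation where the two 4-bit fields share a single byte (the allocation order and layout of bit-fields are implementation-defined, so the struct name and layout here are illustrative only):

        #include <cstdint>
        #include <cstdio>

        struct Nibbles {
            std::uint8_t low  : 4;  // 4-bit field: holds 0..15
            std::uint8_t high : 4;  // packed alongside it, typically in the same byte
        };

        int main() {
            Nibbles n{};
            n.low  = 0x4;
            n.high = 0xB;  // wider values would be truncated to 4 bits
            // On a byte- or word-addressable CPU, these accesses compile down
            // to the shift/mask instructions described above.
            std::printf("high=%X low=%X\n", n.high, n.low);
        }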
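    Third, since the Standard has no BCD support, packing one decimal digit per nibble must be coded manually. A sketch: to_bcd is a made-up helper, and it assumes the result fits in 32 bits (at most 8 decimal digits):

        #include <cstdint>
        #include <cstdio>

        std::uint32_t to_bcd(std::uint32_t value) {
            std::uint32_t bcd = 0;
            for (int shift = 0; value != 0; shift += 4) {
                bcd |= (value % 10) << shift;  // one decimal digit per nibble
                value /= 10;
            }
            return bcd;
        }

        int main() {
            // In packed BCD the hex digits mirror the decimal digits:
            std::printf("%X\n", static_cast<unsigned>(to_bcd(1234)));  // prints "1234"
        }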