I have been working with the binary descriptors implemented in OpenCV, but the descriptors contain integer values (shown in the red rectangle).
Why are they formed with integer values?
OpenCV uses a byte array to represent a BRIEF descriptor. Thus, every value in the highlighted container is in the [0, 255] range and actually corresponds to 8 bits of the descriptor. This is mentioned in the documentation here:
[Descriptor dimension] can be 128, 256 or 512. OpenCV supports all of these, but by default, it would be 256 (OpenCV represents it in bytes. So the values will be 16, 32 and 64).
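To make the byte packing concrete, here is a small sketch using only NumPy (no cv2 required). The descriptor contents are simulated with random bits; the names `bits` and `descriptor` are illustrative, but the layout (8 bits per `uint8` element) matches what OpenCV stores:

```python
import numpy as np

# A 256-bit BRIEF/ORB descriptor is stored as 32 uint8 values (256 / 8 = 32).
# Simulate one descriptor: 256 binary test results packed into bytes.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=256, dtype=np.uint8)  # one value per binary test
descriptor = np.packbits(bits)                       # pack 8 bits into each byte

print(descriptor.dtype)   # uint8 -- the integers you see in the debugger
print(descriptor.shape)   # (32,) -- 256 bits / 8 bits per byte
print(np.array_equal(np.unpackbits(descriptor), bits))  # True, lossless round-trip
```

So the integers in the range [0, 255] that you see are simply groups of 8 consecutive descriptor bits, read back as unsigned bytes.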
Here is a stellar explanation of why representing the descriptor as an array of individual booleans (one element per bit) would be less efficient than the packed byte representation.
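One practical benefit of the packed form is matching: the Hamming distance between two descriptors can be computed by XOR-ing the raw bytes and counting set bits, which is what OpenCV's brute-force matcher does with `NORM_HAMMING`. A minimal NumPy sketch (the descriptors here are random stand-ins, not real BRIEF output):

```python
import numpy as np

# Two simulated 256-bit descriptors in packed byte form (32 uint8 values each).
a = np.packbits(np.random.default_rng(1).integers(0, 2, size=256, dtype=np.uint8))
b = np.packbits(np.random.default_rng(2).integers(0, 2, size=256, dtype=np.uint8))

# Hamming distance: XOR the bytes, then count the differing bits.
# This operates directly on the byte representation -- no per-bit array needed.
dist = int(np.unpackbits(a ^ b).sum())

print(0 <= dist <= 256)  # True: at most 256 bits can differ
```

A per-bit boolean array would use 8x the memory and lose this ability to compare 8 (or, with SIMD/popcount instructions, 64+) bits per operation.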