binary, byte, octal

Is this definition of an octal byte correct?


My instructor stated that "an octal byte consists of 6 bits". I am having difficulty understanding why this is, since an octal digit consists of 3 bits. I also do not understand the significance of an octal byte being defined as '6 bits' as opposed to some other number.

Can anyone explain why this is, if it is in fact true, or point me to a useful explanation?


Solution

  • This is all speculation and guesswork, since none of this is in any way standard terminology.

    An 8-bit byte can be written as two hexadecimal digits, because each hexadecimal digit expresses 4 bits. The largest such byte value is 0xFF.

    By analogy, two octal digits can express 2 × 3 = 6 bits, and the largest such value is 077 (decimal 63); see the sketch below. So if you like, you can call a pair of octal digits an "octal byte", but only if you are also willing to call an 8-bit byte a "hexadecimal byte".

    In my opinion, neither notion is helpful or useful, and you'd be best off simply saying how many bits your byte has.
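
    A quick sketch in Python to check the arithmetic above (the helper names bits_per_digit and max_value are made up for this example, not standard terminology):

    ```python
    # Illustrative sketch; helper names are invented for this example.

    def bits_per_digit(base: int) -> int:
        """Bits one digit expresses when the base is a power of two."""
        return base.bit_length() - 1

    def max_value(base: int, digits: int) -> int:
        """Largest value representable with `digits` digits in `base`."""
        return base ** digits - 1

    # Two hexadecimal digits: 2 * 4 = 8 bits, largest value 0xFF (255).
    assert bits_per_digit(16) * 2 == 8
    assert max_value(16, 2) == 0xFF == 255

    # Two octal digits: 2 * 3 = 6 bits, largest value 077 (0o77 in Python, 63).
    assert bits_per_digit(8) * 2 == 6
    assert max_value(8, 2) == 0o77 == 63
    ```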