I know this question may seem common (or boring) to you, but I don't understand why 127 is chosen as the exponent bias instead of 128.
According to the IEEE-754 standard, we have 8 bits for the exponent part of single-precision floating-point numbers. As far as I know, 8 bits interpreted as a signed integer give the range [-128, 127].
OK, so in order to shift (or bias), we would normally add 128 to a signed integer to convert it to an unsigned one. For example, -128 + 128 = 0 and 127 + 128 = 255.
But the IEEE-754 standard selects 127 as the bias, even though -128 + 127 = -1 < 0. Is that logical?
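For reference, here is a minimal C sketch I put together (assuming the platform's float is an IEEE-754 binary32) that reads the stored exponent field of 1.0f and shows the bias of 127 being removed:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 1.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);                   /* reinterpret the float's bit pattern */

    uint32_t stored_exponent = (bits >> 23) & 0xFFu;  /* 8-bit exponent field */
    int actual_exponent = (int)stored_exponent - 127; /* remove the bias */

    printf("stored exponent field: %u\n", stored_exponent); /* 127 */
    printf("actual exponent:       %d\n", actual_exponent); /* 0, since 1.0 = 1.0 * 2^0 */
    return 0;
}
```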
I clearly don't understand the reason behind this choice. I'd appreciate your help; thanks to everyone.
why 127 is chosen as the exponent bias, instead of 128.
The bias selection for binary32 is not concerned with signed to unsigned integer conversion.
The choice is based on the desired range of the floating-point encoding. With a bias of 127, binary32 spans:

max           3.402...e+38
min normal    1.175...e-38
min subnormal 1.401...e-45
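These limits match what <float.h> reports; a quick check in C (FLT_TRUE_MIN requires C11):

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    printf("max           %.9e\n", (double)FLT_MAX);      /* ~3.402e+38 */
    printf("min normal    %.9e\n", (double)FLT_MIN);      /* ~1.175e-38 */
    printf("min subnormal %.9e\n", (double)FLT_TRUE_MIN); /* ~1.401e-45 */
    return 0;
}
```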
The bias selection is somewhat arbitrary; the goal is a balanced range. With a bias of 127, 1.0/max is non-zero (a subnormal within a few powers of two of the minimum normal) and 1.0/min_normal is <= max. These are nice properties: the reciprocal of every finite normal binary32 value is itself a non-zero, finite binary32 value.
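A small sketch that demonstrates both properties on a conforming IEEE-754 implementation:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    float r_max = 1.0f / FLT_MAX; /* subnormal, yet non-zero */
    float r_min = 1.0f / FLT_MIN; /* 2^126, well below FLT_MAX */

    printf("1/max        = %.9e  (non-zero? %s)\n",
           (double)r_max, r_max > 0.0f ? "yes" : "no");
    printf("1/min_normal = %.9e  (<= max?   %s)\n",
           (double)r_min, r_min <= FLT_MAX ? "yes" : "no");
    return 0;
}
```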
With a bias of 128, those values would all be half as large, and we would lose the 1.0/min_normal <= max property: min_normal would become 2^-127, and its reciprocal, 2^127, would exceed the new max of (2 - 2^-23) * 2^126.
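To see this, here is a sketch that computes the hypothetical bias-128 limits in double precision (the names max128 and min_normal128 are just illustrative):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Hypothetical binary32 limits if the bias were 128: every exponent
       shifts down by one, so all the limits are halved. */
    double max128        = (2.0 - ldexp(1.0, -23)) * ldexp(1.0, 126); /* (2 - 2^-23) * 2^126 */
    double min_normal128 = ldexp(1.0, -127);                          /* 2^-127 */

    printf("1/min_normal = %.9e\n", 1.0 / min_normal128);             /* 2^127 */
    printf("max          = %.9e\n", max128);                          /* 2^127 - 2^103 */
    printf("1/min_normal <= max? %s\n",
           1.0 / min_normal128 <= max128 ? "yes" : "no");             /* no */
    return 0;
}
```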