I have a table of hex values (I think they are hex bytes?) and I'm trying to figure out what they mean in decimal.
On the website the author states that the highlighted values 43 7A 00 00 mean 250 in decimal, but when I input them into a hex-to-decimal converter I get 1132068864.
For the life of me I don't understand what's going on. I know that the labels above the highlighted values, 1 2 3 4 5 6 7 8 9 A B C D E F, are the hex system, but I don't understand how to read the values inside the table.
Help would be appreciated!
What's happening here is that the bytes 43 7A 00 00 are not being treated as an integer. They are being treated as an IEEE-format 32-bit floating-point number. This is why the Type column in the Inspector window in the image says Float. When those bytes are interpreted in that way they do indeed represent the value 250.0.
You can read about the details of the format at https://en.wikipedia.org/wiki/Single-precision_floating-point_format
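If you want to see both interpretations side by side, here is a small Python sketch (my own illustration, not something from the page you linked) using the standard struct module:

```python
import struct

# The four highlighted bytes, in the order they appear in the table.
raw = bytes.fromhex("437A0000")

# Read as a big-endian unsigned 32-bit integer: this is what a plain
# hex-to-decimal converter does, giving 1132068864.
as_int = struct.unpack(">I", raw)[0]
print(as_int)    # 1132068864

# Read as a big-endian IEEE 754 single-precision float: this is what the
# Inspector's "Float" interpretation does, giving 250.0.
as_float = struct.unpack(">f", raw)[0]
print(as_float)  # 250.0
```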
In this particular case the bytes would be decoded as:
- Sign bit: 0, meaning that the value is a positive number.
- Exponent field: 1000 0110 (or hex 86, decimal 134), meaning that the exponent has the value 7 (calculated by subtracting 127 from the raw value of the field, 134).
- Significand: 1111 1010 0000 0000 0000 0000 (or hex FA0000, decimal 16384000). This is the stored 23-bit fraction field, 111 1010 0000 0000 0000 0000, with the implicit leading 1 bit put back in front.

The significand and exponent are combined according to the formula:
value = ( significand * (2 ^ exponent) ) / (2 ^ 23)
where a ^ b means "a raised to the power b".
In this case that gives:
value = ( 16384000 * 2^7 ) / 2^23
= ( 16384000 * 128 ) / 8388608
= 250.0
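For completeness, here is a rough Python sketch (again just an illustration; the variable names are my own) that pulls the three fields out of the 32-bit pattern and applies the formula above:

```python
bits = 0x437A0000                   # the four bytes as one 32-bit pattern

sign        = bits >> 31            # 0 -> positive number
raw_exp     = (bits >> 23) & 0xFF   # 0x86 = 134
fraction    = bits & 0x7FFFFF       # 0x7A0000

exponent    = raw_exp - 127         # 134 - 127 = 7
significand = fraction | (1 << 23)  # restore the implicit leading 1 -> 0xFA0000 = 16384000

value = (significand * 2 ** exponent) / 2 ** 23
print(value)                        # 250.0
```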