Tags: python, tensorflow, tfrecord

Why does tf.train.FloatList have rounding errors?


The following code shows that converting a Python float to a tf.train.FloatList loses precision. My understanding was that both native Python and TensorFlow store floats as float64, so why the difference?

import tensorflow as tf

x = 2.3
lst = tf.train.FloatList(value=[x])
reloaded = lst.value[0]  # 2.299999952316284
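
For reference, here is a quick check (a minimal sketch using only the standard library) confirming that native Python floats are IEEE 754 64-bit doubles:

import struct
import sys

print(sys.float_info.mant_dig)  # 53 significand bits, i.e. double precision
print(struct.calcsize('d'))     # 8 bytes: Python floats wrap a C double
print(repr(2.3))                # '2.3', the shortest repr that round-trips at 64-bit precision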

Solution

  • A FloatList contains floats, as in protocol buffer float, which is 32-bit. If you look at the FloatList documentation, you'll see that value is defined as

    repeated float value
    

    which means the value field contains 0 or more 32-bit protobuf float values.

    If it were 64-bit floats, it would say

    repeated double value
    

    but it doesn't say that.
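
    To see that this is purely a 32-bit rounding effect, you can compare against NumPy's float32. This is a minimal sketch; the descriptor check assumes the protobuf Python runtime, which TensorFlow already depends on:

    import numpy as np
    import tensorflow as tf
    from google.protobuf.descriptor import FieldDescriptor

    x = 2.3
    lst = tf.train.FloatList(value=[x])

    # The stored value is exactly x rounded to 32-bit precision.
    print(lst.value[0])                    # 2.299999952316284
    print(float(np.float32(x)))            # 2.299999952316284
    print(lst.value[0] == np.float32(x))   # True

    # The proto descriptor confirms the field type is 32-bit float, not double.
    field = tf.train.FloatList.DESCRIPTOR.fields_by_name['value']
    print(field.type == FieldDescriptor.TYPE_FLOAT)   # True
    print(field.type == FieldDescriptor.TYPE_DOUBLE)  # False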