I was working on some tests with dictionaries. To set them up, I print the Float64 values I want from a database in a format I can copy and paste into my test struct array. When my tests failed, I noticed that the values differ, but only by about 0.0000000000002.
Then, to check the value I wrote the following in a loop:
fmt.Printf("%f\n",value)
fmt.Println(value)
And I got the following values back:
702.200000
702.1999999999998
5683.090000
5683.089999999998
975.300000
975.3
I checked the docs and found nothing indicating that there's special notation for Float64, or that %f replaces a Float64 with a Float32. However, I don't get the problem when I use %v or %g, even though the docs say %g falls back to %f when appropriate. I also don't get the problem when I specify a precision of 12 using %.12f, but the docs don't state a default precision.
Why does this happen?
EDIT: This is the same issue as the duplicate, but I believe Adrien's explanation is more detailed.
The default precision for %e, %f and %#g is 6; for %g it is the smallest number of digits necessary to identify the value uniquely.