I was wondering why floating point precision problems differ from one value to another:
#include <iostream>
#include <iomanip>

int main()
{
    std::cout << std::setprecision(20);

    double d1(1.0);
    std::cout << d1 << std::endl;

    double d2(0.1);
    std::cout << d2 << std::endl;

    return 0;
}
The output of this program is:

1
0.10000000000000000555
If both numbers are of type double (a type that generally has precision problems), why doesn't the compiler find any problem with the value 1.0 but does find one with the value 0.1? One more thing that isn't clear to me: if the precision is set to 20 digits, why do I get a number that contains 21 digits as the result for d2?
Your computer uses a representation of floating point in which 1.0 can be stored exactly, but 0.1 can't. This is probably IEC 60559 (better known as IEEE 754).
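
One way to see this for yourself (a sketch of my own, assuming your doubles are IEC 60559 / IEEE 754 binary64) is to ask for far more digits than a double can hold, and to dump the raw bit pattern:

#include <cstdint>
#include <cstring>
#include <iomanip>
#include <iostream>

int main()
{
    // Ask for more digits than a double holds, which exposes the
    // exact binary value actually stored (60 is an arbitrary choice).
    std::cout << std::setprecision(60);
    std::cout << 1.0 << '\n'; // 1 is a power of two: stored exactly
    std::cout << 0.1 << '\n'; // 0.1 has no finite binary expansion

    // The raw bits make the same point: copy the double into an
    // integer of the same size and print it in hex.
    double d = 0.1;
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    std::cout << std::hex << bits << '\n'; // 3fb999999999999a -- the
                                           // repeating 9s are the rounded
                                           // infinite binary fraction
    return 0;
}

On an IEEE 754 machine the second line prints 0.1000000000000000055511151231257827021181583404541015625, which is the closest double to 0.1, not 0.1 itself.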
Leading zeroes are not considered part of the precision (they are just placeholders); your output does actually have 20 digits, not counting the "0." at the start.
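
To convince yourself that setprecision counts significant digits rather than characters, here is a small sketch (again assuming IEEE 754 doubles):

#include <iomanip>
#include <iostream>

int main()
{
    std::cout << std::setprecision(20);

    // 20 significant digits follow the leading "0."; the zero before
    // the decimal point is a placeholder, not a significant digit.
    std::cout << 0.1 << '\n';   // 0.10000000000000000555

    // The same rule applies at smaller magnitudes: the zeros that
    // merely position the value are not counted either.
    std::cout << 0.001 << '\n'; // 0.0010000000000000000208
    return 0;
}

Both lines show exactly 20 significant digits, even though their character counts differ.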