Possible Duplicate:
Why can't decimal numbers be represented exactly in binary?
I am developing a fairly simple mathematical algorithm in C++.
I have a floating-point variable named "step", and at the end of each pass through a while loop, step needs to be divided by 10.
So my code looks like this:
float step = 1;
while ( ... ){
    // the code
    step /= 10;
}
In my naive logic, that should work out fine: step gets divided by 10, going from 1 to 0.1, from 0.1 to 0.01, and so on.
But it didn't; instead, something like 0.100000000001 appeared, and I was completely baffled.
Can someone please help me with this? It's probably something about the data type itself that I don't fully understand, so a further explanation would be appreciated.
It is a numerical issue. The problem is that 1/10 has an infinitely long (repeating) representation in binary, so every division by 10 introduces a small rounding error, and applying the division repeatedly accumulates those errors step by step. To get a more stable version, you should multiply the divisor instead and recompute the quotient each time. But take care: the result is still not exact! You may also want to replace the float with a double to reduce the error.
unsigned int div = 1;
while(...)
{
    // recompute from scratch: only one rounding error per value,
    // instead of an accumulated one
    double step = 1.0 / (double)div;
    ....
    div *= 10;
}