I tried to use the following program to estimate the machine epsilon with a simple C program
#include <stdio.h>
#include <limits.h>
#include <float.h>

int main(){
    float f = 1.0f;
    float prev_f = 1.0f;
    while( 1 + f != 1 ){
        prev_f = f;   /* remember the last f that still made a difference */
        f /= 2;
    }
    prev_f *= 2;
    printf("Calculated epsilon for float is %.10g\n", prev_f);
    printf("The actual value is %.10f\n", FLT_EPSILON);
    return 0;
}
where my output is
Calculated epsilon for float is 2.802596929e-45
The actual value is 0.0000001192
Can anyone explain this deviation to me? Is it architecture specific? Compiler dependent? Am I doing something wrong?
EDIT: The problem seemed to be due to me using the -Ofast optimization in gcc. Using -O instead solves the problem.
First, remove prev_f *= 2;. The loop stores in prev_f the value of f from the last iteration in which 1 + f != 1 held, so the value of prev_f when the loop ends is already the last value that caused 1+f not to equal 1, which is the result you want.¹
Second, C implementations are allowed to evaluate floating-point expressions with more precision than their nominal types. It appears your C implementation is effectively evaluating 1+f != 1 with infinite precision (which is possible by recognizing at compile time that 1+f != 1 evaluated to infinite precision is true iff f != 0, so optimization can change it to f != 0). Thus, the loop terminates only when f becomes zero, at which point the previous f was the smallest possible representable positive value, which, for the format commonly used for float, is 2⁻¹⁴⁹. (And your current code doubles this and prints 2⁻¹⁴⁸.)

Note that the (effective) infinite precision occurs only in evaluating 1 + f != 1, not in the assignment f /= 2;. This is because the C standard requires implementations to "discard" excess precision when casts and assignments are performed. This gives us the solution to the problem: change 1 + f != 1 to (float) (1 + f) != 1. This will force the evaluation to be done as if in the float format.²
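Putting both changes together, a minimal corrected sketch of the program (the doubling removed, the comparison cast to float) might look like this:

#include <stdio.h>
#include <float.h>

int main(void){
    float f = 1.0f;
    float prev_f = 1.0f;
    /* The cast discards any excess precision, so the comparison is
       performed as if in the float format. */
    while( (float) (1 + f) != 1 ){
        prev_f = f;   /* last f for which 1 + f compared unequal to 1 */
        f /= 2;
    }
    printf("Calculated epsilon for float is %.10g\n", prev_f);
    printf("The actual value is %.10g\n", FLT_EPSILON);
    return 0;
}

With IEEE-754 binary32 float and a conforming compiler, both lines should print 1.192092896e-07, i.e. 2⁻²³.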
¹ Sort of. The machine epsilon is sometimes incorrectly stated to be the smallest value x such that evaluating 1+x produces a value greater than 1. However, it is defined to be the difference between 1 and the next greater representable value, say 1+ε. If we let x be slightly greater than ½ε (such as ½ε(1+ε)), then evaluating 1+x would produce 1+ε due to rounding, even though x is less than the machine epsilon. However, this code will find the correct value provided the issues described above are fixed and the floating-point radix is two, since it never tests one of these false candidates for the machine epsilon and therefore never finds one.
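To make footnote 1 concrete, here is a small demonstration (a sketch, assuming IEEE-754 binary32 float and round-to-nearest): it computes the gap between 1 and the next representable value with nextafterf, and shows that a value slightly above ½·FLT_EPSILON already makes 1 + x round up to a result greater than 1, even though x is below the machine epsilon.

#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void){
    /* The machine epsilon: the gap between 1 and the next greater float. */
    float gap = nextafterf(1.0f, 2.0f) - 1.0f;
    printf("gap = %.10g, FLT_EPSILON = %.10g\n", gap, FLT_EPSILON);

    /* A candidate slightly above half the epsilon (exactly representable). */
    float x = 0.5f * FLT_EPSILON * (1.0f + FLT_EPSILON);

    printf("x < FLT_EPSILON    : %d\n", x < FLT_EPSILON);            /* 1 */
    printf("(float)(1 + x) > 1 : %d\n", (float)(1.0f + x) > 1.0f);   /* 1, due to rounding up to 1+epsilon */
    return 0;
}

(Linking may need -lm for nextafterf.)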
² Generally, double-rounding problems can occur: when the C implementation uses excess precision to evaluate an expression, it may round the ideal mathematical result to that excess precision. When it is forced to "discard" the excess precision by a cast or assignment, it rounds to the nominal precision. It is possible for these two roundings to cause a different result than if there were only one rounding to the nominal precision. However, that will not be a problem in this particular case.
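As a worked illustration of footnote 2 (hypothetical numbers, assuming round-to-nearest): suppose the exact result of an operation is 1 + 2⁻²⁴ + 2⁻⁵⁴. Rounded first to double (53-bit significand), it becomes 1 + 2⁻²⁴; rounding that to float then falls exactly halfway between 1 and 1 + 2⁻²³ and resolves to 1 under round-to-nearest-even. Rounding the exact value directly to float would instead give 1 + 2⁻²³, because it lies just above the halfway point. In the loop above this cannot happen: 1 + f is always of the form 1 + 2⁻ᵏ with small k, which the wider evaluation formats in common use represent exactly, so the first rounding changes nothing and only the final rounding to float matters.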