I've noticed during CI that this code
#include <stdio.h>
#include <math.h>
int main()
{
    const double d = 3.81219767052986080458;
    fprintf(stderr, "%.20f\n", sin(d));
    return 0;
}
compiled with gcc test.c -lm -o test
returns slightly different results on my 2 test platforms.
It's -0.62146009873891927544
with gcc version 13.2.0 libc 2.39-0ubuntu8.3 Kubuntu 24.04
and -0.62146009873891938646
with gcc version 12.2.0 libc 2.36-9+deb12u7+ci202405171200+astra5+b1 Astra Linux 1.8
I wonder what affects it? libm? gcc? maybe the CPU?
This slight difference may accumulate during integration in tight loops.
Is there a simple way to get rid of it without performance penalties? A gcc compiler param, for example?
Such investigations, shown with the hexadecimal output of "%a", can shed light:
#include <stdio.h>
#include <math.h>

int main() {
    double x;
    x = 3.81219767052986080458;
    printf("x: % -25a % .21g\n", x, x);
    double y = sin(x);
    printf("sin(x): % -25a % .21g\n", y, y);
    long double yl = sinl(x);
    printf("sinl(x): % -25La % .21Lg\n", yl, yl);
    y = -0.62146009873891927544;
    printf("13.2.0 : % -25a % .21g\n", y, y);
    y = -0.62146009873891938646;
    printf("12.2.0: % -25a % .21g\n", y, y);
    return 0;
}
Output:
x: 0x1.e7f617e06814dp+1 3.81219767052986080458
sin(x): -0x1.3e30049fb486ap-1 -0.621460098738919386463
sinl(x): -0x1.3e30049fb48697f8p-1 -0.621460098738919330735
13.2.0 : -0x1.3e30049fb4869p-1 -0.621460098738919275441
12.2.0: -0x1.3e30049fb486ap-1 -0.621460098738919386463
From this we can see that 3.81219767052986080458 results in a double that is very close. Code values beyond 15 significant decimal places are suspect, yet this x is quite close - good to 21 places.

With sin() on OP's 2 systems, and on my own, the answers are within 1 unit in the last place (ULP) of each other.

Using long double math, we get a sinl() of -0x1.3e30049fb4869_7f8p-1. Note that 7f8 is very nearly halfway between the lower-precision sin() results of the 2 systems reported by OP. -0x1.3e30049fb4869p-1 is about 4 parts in 1000 of a ULP closer (0.498 vs. 0.502 ULP from the sinl() reference) and so the better answer.
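To put numbers on that, here is a small sketch of my own (not from the original post) that measures each reported value's distance from the sinl() reference, in ULPs of the double result:

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 3.81219767052986080458;   // same double as in the question
    long double ref = sinl(x);           // higher-precision reference for sine of that exact double
    double a = -0x1.3e30049fb4869p-1;    // 13.2.0 result
    double b = -0x1.3e30049fb486ap-1;    // 12.2.0 result
    double ulp = fabs(b - a);            // a and b are adjacent doubles, exactly 1 ULP apart
    printf("13.2.0 error: %.3Lf ULP\n", fabsl((long double) a - ref) / ulp);
    printf("12.2.0 error: %.3Lf ULP\n", fabsl((long double) b - ref) / ulp);
    return 0;
}

On a platform whose sinl() matches the value shown above, this works out to roughly 0.498 and 0.502 ULP.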
I wonder what affects it? libm? gcc? maybe the CPU?
Many things may affect this, including the run-time math environment (report the value of FLT_EVAL_METHOD), optimization settings, the library used and even the CPU.
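As a starting point for pinning those down, a small diagnostic along these lines records what each platform actually uses (a sketch; __VERSION__ and the __GLIBC__ macros are compiler- and libc-specific and may be absent elsewhere):

#include <float.h>
#include <stdio.h>

int main(void) {
    // 0: each operation evaluated in its own type, 1: float/double evaluated
    // as double, 2: everything evaluated in long double precision
    printf("FLT_EVAL_METHOD: %d\n", FLT_EVAL_METHOD);
#ifdef __VERSION__
    printf("Compiler:        %s\n", __VERSION__);
#endif
#ifdef __GLIBC__
    printf("glibc:           %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
#endif
    return 0;
}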
Is there a simple way to get rid of it without performance penalties? A gcc compiler param, for example?
No.
--
[More]
Although only a few lines long, there are 3 places where inexactness may arise.
double d = 3.81219767052986080458;
The 21-digit decimal constant is converted to a double. Although the conversion from a decimal FP constant to double is not exact, it is close enough here and has no noticeable effect on the final result.

x before: 0x1.e7f617e06814_cp+1  3.81219767052986_036049...
x:        0x1.e7f617e06814_dp+1  3.81219767052986_0804577992894337512552738189697265625
code:                            3.81219767052986_080458
x after:  0x1.e7f617e06814_ep+1  3.81219767052986_124867...
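The neighbouring doubles shown above (the underscores appear hand-inserted for readability) can be reproduced with nextafter(); this is my own sketch, not part of the original investigation:

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 3.81219767052986080458;       // the decimal constant rounded to the nearest double
    double below = nextafter(x, -INFINITY);  // closest double less than x
    double above = nextafter(x, +INFINITY);  // closest double greater than x
    printf("x before: %a  %.21g\n", below, below);
    printf("x:        %a  %.21g\n", x, x);
    printf("x after:  %a  %.21g\n", above, above);
    return 0;
}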
sin(d)
Sine(x) is a transcendental function. Even when x is exact, the result is nearly always irrational (and so not representable exactly as a double). A good double sin(double) will return a result within about 1 ULP of the correct answer. A very good double sin(double) will return a result within nearly 0.5 ULP. For this double, we can see the 2 answers OP found are both nearly 0.5 ULP away from a more precise answer: one above, one below.

// x from above.
sin(x) 13.2.0 : -0x1.3e30049fb4869000p-1 -0.621460098738919_275441...
sinl(x)       : -0x1.3e30049fb48697f8p-1 -0.621460098738919_330735...
sin(x) 12.2.0 : -0x1.3e30049fb486a000p-1 -0.621460098738919_386463...
I suspect this is due to a better library function in 13.2.0. Yet even if this one example were a tad worse, the newer library may be better on average.
fprintf(stderr, "%.20f\n", sin(d));
The conversions of -0x1.3e30049fb4869000p-1 and -0x1.3e30049fb486a000p-1 both print a rounded result. Yet since enough precision is requested and the conversion is done well, the rounded output is not in error by more than expected. Still, I find the "%a" output more informative.
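For instance (a sketch of my own), printing one of the results three ways shows why the hex form is the least ambiguous:

#include <stdio.h>

int main(void) {
    double y = -0x1.3e30049fb4869p-1;  // the 13.2.0 result
    // %.20f prints the stored binary value rounded to 20 fractional digits;
    // about 17 significant digits already identify the double uniquely.
    printf("%.20f\n", y);
    printf("%.17g\n", y);   // 17 significant digits: enough to round-trip any double
    printf("%a\n", y);      // exact hex form: shows the bits, no decimal rounding
    return 0;
}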
This slight difference may accumulate during integration in tight loops.

The slight difference OP found, though, is the 1-bit difference between implementations and not the math difference between sin(x) and the mathematical function sine(x). An implementation difference in sin(x) can highlight consistency issues - which OP found. Yet it is that math difference OP should be concerned about - and it is not addressed here. That takes a larger analysis and a larger chunk of code to assess.
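As a toy illustration of that consistency concern (my own sketch, with an arbitrary iteration count), accumulating the two 1-ULP-apart results shows how the implementation difference compounds, while saying nothing about how far either sum is from the true mathematical sum:

#include <stdio.h>

int main(void) {
    double a = -0x1.3e30049fb4869p-1;  // 13.2.0 result
    double b = -0x1.3e30049fb486ap-1;  // 12.2.0 result
    double sum_a = 0.0, sum_b = 0.0;
    // Naive accumulation, as in a tight integration loop.
    for (int i = 0; i < 1000000; i++) {
        sum_a += a;
        sum_b += b;
    }
    // The 1-ULP input difference grows roughly linearly with the number of
    // additions (about 1e6 * 1.1e-16, so near 1e-10 here), on top of the
    // ordinary rounding error each running sum accumulates on its own.
    printf("sum_a: %.17g\nsum_b: %.17g\ndiff:  %g\n", sum_a, sum_b, sum_a - sum_b);
    return 0;
}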
If the sin() implementation difference generates real result problems (other than exact consistency), which I doubt (I would need to see the larger app), then OP's code likely needs higher-precision types.
I strongly suspect either result is close enough for OP's needs.