I am writing a cross-platform program for Windows and Linux, and I would like it to behave as similarly as possible on both platforms. The program uses some mathematics, e.g. std::atan2 calls, and I observed that for the same input values the result sometimes diverges in the unit of least precision (ULP) depending on the platform or optimization settings.
Consider this reduced example:
#include <print>
#include <cmath>
#include <cfenv>
#include <bit>      // std::bit_cast
#include <cstdint>  // std::uint32_t

int main() {
    // 0 0.14189707 3e114d77 with GCC on Linux
    // 0 0.14189705 3e114d76 with MSVC (and compile-time evaluation in GCC)
    std::print( "{} {} {:x}", std::fegetround(), std::atan2( 1.f, 7.f ),
                std::bit_cast<std::uint32_t>( std::atan2( 1.f, 7.f ) ) );
}
Online demo. Although the difference between the results is small, it is amplified by subsequent computations in my program, so I would like to eliminate it completely.
Are implementations allowed to diverge in that way given the same rounding mode? Are there compiler flags to eliminate or minimize the divergence?
Are implementations allowed to diverge ...
Yes. Many of the transcendental math functions are very loosely specified in the C++ standard, which often doesn't even state the required precision. There is no guarantee that implementations give the same answer, and what you get depends on the standard library (and underlying libm) implementation you are using.
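If you want to quantify the divergence, it helps to measure it in ULPs rather than as an absolute difference. Here is a minimal sketch (the helper names ordered and ulp_distance are just illustrative, not from any library):

#include <bit>
#include <cstdint>
#include <cstdlib>

// Map a float's bit pattern onto a scale that is monotonic in the float's
// value: positive floats keep their bit pattern; for negative floats a
// bigger bit pattern means a smaller value, so negate the magnitude.
// This also maps -0.0f and +0.0f to the same point, 0.
std::int64_t ordered(float f) {
    std::uint32_t u = std::bit_cast<std::uint32_t>(f);
    return (u & 0x80000000u) ? -static_cast<std::int64_t>(u & 0x7fffffffu)
                             : static_cast<std::int64_t>(u);
}

// Number of representable floats between a and b (0 means bit-identical).
std::int64_t ulp_distance(float a, float b) {
    return std::abs(ordered(a) - ordered(b));
}

For the two outputs in your example, 0x3e114d77 and 0x3e114d76, this reports a distance of 1: the platforms disagree in exactly the last bit.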
Very precise answers are often also very slow, so most implementations strike a speed/precision balance that suits common use cases.
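Note that part of the divergence in your example isn't even two different libraries: as your code comment observes, GCC constant-folds std::atan2(1.f, 7.f) at compile time (its folding uses the correctly-rounded MPFR library), while a call whose inputs are only known at run time goes through glibc's libm, which may be a ULP or so off. You can observe both in one binary by blocking constant folding with volatile, e.g.:

#include <print>
#include <cmath>
#include <bit>
#include <cstdint>

int main() {
    // Constant inputs: the compiler may evaluate atan2 at translation time.
    float folded = std::atan2( 1.f, 7.f );

    // volatile inputs force a genuine runtime call into the math library.
    volatile float y = 1.f, x = 7.f;
    float runtime = std::atan2( y, x );

    std::print( "folded  {:x}\nruntime {:x}\n",
                std::bit_cast<std::uint32_t>( folded ),
                std::bit_cast<std::uint32_t>( runtime ) );
}

Built with GCC at -O2, folded and runtime can print different bit patterns; -fno-builtin disables the folding so both calls go through libm.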
It's quite common for applications that care about consistent results across platforms, or that need to achieve a specific level of precision, to link their own maths library so they can guarantee what they get. Libraries like Sleef are quite popular for this.
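If a dependency is overkill, you can apply the same idea in miniature and write the function yourself from basic IEEE-754 arithmetic, which conforming platforms evaluate identically (assuming the compiler doesn't contract expressions into FMAs or use x87 extended precision; -ffp-contract=off and SSE2 code generation on x86 help here). The sketch below uses the classic Abramowitz & Stegun 4.4.49 polynomial; it is only accurate to about 1e-5 and ignores the x == y == 0 and infinity/NaN special cases, so treat it as an illustration of the approach, not a drop-in replacement:

#include <cmath>
#include <numbers>

// Sketch of a deterministic atan2f built only from +, -, *, / and fabs, so
// every IEEE-754 platform computes the same bits. Accuracy ~1e-5, NOT libm grade.
float portable_atan2f(float y, float x) {
    // atan(t) for t in [0, 1], Abramowitz & Stegun 4.4.49.
    auto atan01 = [](float t) {
        float t2 = t * t;
        return t * (0.9998660f + t2 * (-0.3302995f + t2 * (0.1801410f
                  + t2 * (-0.0851330f + t2 * 0.0208351f))));
    };
    const float pi = std::numbers::pi_v<float>;
    float ay = std::fabs(y), ax = std::fabs(x);
    // Reduce to [0, 1]: atan(t) = pi/2 - atan(1/t) for t > 1.
    float a = (ay <= ax) ? ay / ax : ax / ay;   // NaN if x == y == 0
    float r = atan01(a);
    if (ay > ax) r = pi / 2 - r;                // second octant
    if (std::signbit(x)) r = pi - r;            // left half-plane
    return std::signbit(y) ? -r : r;            // lower half-plane
}

portable_atan2f(1.f, 7.f) then returns the same bits everywhere, at the cost of roughly five decimal digits of accuracy; a production version would substitute a properly minimax-fitted (or correctly rounded) core, but the determinism argument stays the same.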