I have the following code in my Fortran program, where both a and b are declared as REAL (KIND=8):
a = 0.12497443596150659d0
b = 1.0 + 0.00737 * a
This yields b as 1.0009210615647672
For comparison, the following code in Rust yields a slightly different value:
let a = 0.12497443596150659;
let b = 1.0_f64 + 0.00737_f64 * a;
Where b is now 1.0009210615930364
I understand that this is due to the literals in the Fortran code being interpreted as single precision instead of double precision, and this is supported by the fact that appending "d0" to those literals causes the Fortran output to perfectly match the output from my Rust program.
What I don't understand is how I can replicate the Fortran program's behavior in Rust. I'm not very experienced with Fortran, but my understanding is that when an operation is performed between a single precision and a double precision value, regardless of the order, the single precision value is "promoted" to double precision to perform the calculation. If this is the case, then I would expect the original Fortran code to behave the same as the Rust code. I've tried emulating a few variations of what the Fortran compiler might be doing in my Rust code, but none seem to match.
(1.0_f32 + 0.00737_f32 * (a as f32)) as f64 yields 1.0009210109710693
(1.0_f32 + (0.00737_f64 * a) as f32) as f64 yields 1.0009210109710693
1.0_f64 + ((0.00737_f32 * (a as f32)) as f64) yields 1.0009210615535267
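For reference, the three attempts above collected into one runnable program (the values in the comments are the outputs reported above):

```rust
fn main() {
    let a: f64 = 0.12497443596150659;

    // Attempt 1: do everything in single precision, widen at the end.
    let b1 = (1.0_f32 + 0.00737_f32 * (a as f32)) as f64;
    // Attempt 2: multiply in double, round the product to single, add in single.
    let b2 = (1.0_f32 + (0.00737_f64 * a) as f32) as f64;
    // Attempt 3: multiply in single, widen the product, add in double.
    let b3 = 1.0_f64 + ((0.00737_f32 * (a as f32)) as f64);

    println!("{b1}"); // 1.0009210109710693
    println!("{b2}"); // 1.0009210109710693
    println!("{b3}"); // 1.0009210615535267
}
```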
What's going on here? I've come to expect that the output of some floating point calculations won't match exactly from system to system, but in my experience a calculation like this should be much more consistent than what I'm seeing. Even the last Rust snippet, which is the closest to the Fortran result, is quite different.
I imagine that inspecting the compiled artifacts would give some insight, but unfortunately that approach exceeds my skill set.
I do not know Rust, but I can probably guess what its syntax means. None of your attempts appear to do what the Fortran expression does.
The expression
1.0 + 0.00737 * a
is, assuming a is 64-bit and the default real is 32-bit, evaluated as follows: the literal 0.00737 is first rounded to a single-precision value, that single-precision value is then converted to 64-bit, the multiplication is done in 64-bit, and 1 is added (1.0 is likewise a single-precision constant converted to double, though it happens to be exact). Only after the whole expression is evaluated is the result assigned. The crucial point is that the conversion to 64-bit happens after the literal has already been rounded to single precision, not by re-reading the literal at double precision.
So something like
(1.0_f32 as f64) + ((0.00737_f32 as f64) * a)
if I understand the Rust syntax correctly.
Testing this expression in the Rust playground gives me the same answer as your Fortran result from the question.
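A minimal check of that expression; the value in the comment is the Fortran result quoted in the question:

```rust
fn main() {
    let a: f64 = 0.12497443596150659;
    // Each default-real constant is rounded to single precision first,
    // then widened to double before the f64 arithmetic is performed.
    let b = (1.0_f32 as f64) + ((0.00737_f32 as f64) * a);
    println!("{b}"); // 1.0009210615647672, matching the Fortran output
}
```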
As discussed in the comments, I can add that 0.00737 is not exactly representable in binary, so both the 32-bit and the 64-bit floating-point values are inexact. However, I believe everyone here, including the OP of the question, already knew that. The point here is to show the correct expression evaluation rules.
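To make that concrete, one can compare the two roundings of the same literal in Rust: the single-precision constant, once widened, is a slightly different f64 number than the directly rounded double-precision constant, and that small gap accounts for the discrepancy between the two programs:

```rust
fn main() {
    let single = 0.00737_f32 as f64; // literal rounded to f32, then widened (exactly)
    let double = 0.00737_f64;        // literal rounded directly to f64
    println!("{:.20}", single);
    println!("{:.20}", double);
    // The gap is on the order of an f32 ulp at this magnitude (~2e-10);
    // multiplied by a (~0.125) it matches the ~3e-11 difference in b.
    println!("{:e}", single - double);
}
```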