I was experimenting with this code block on different versions of .NET:
using System;
using System.Linq;

public class Program
{
    public static void Main()
    {
        Decimal decimalVal = 9.38M;
        Double doubleVal = 9.38;
        Single singleVal = 9.38F;
        Int32 i = 100;

        Console.WriteLine(decimalVal * i); // "938.00"
        Console.WriteLine(doubleVal * i);  // .NET Core+: "938.0000000000001". .NET Framework 4.7.2: "938"
        Console.WriteLine(singleVal * i);  // "938"

        var doubleResult = doubleVal * i;
        var doubleBytes = BitConverter.GetBytes(doubleResult);
        var str = string.Join(" ", doubleBytes.Select(x => Convert.ToString(x, 2).PadLeft(8, '0')));
        Console.WriteLine(str);
        // Same value for .NET Core+ and .NET Framework 4.7.2:
        // 00000001 00000000 00000000 00000000 00000000 01010000 10001101 01000000

        Console.WriteLine(BitConverter.IsLittleEndian); // "True", in case it's important to know?
    }
}
Why does 9.38*100 produce an accurate value of 938 in .NET Framework 4.7.2, but 938.0000000000001 in .NET Core 3.1 and up?
I looked at the actual bits used to represent the numbers, and the bits are the same regardless of which .NET version is used.
Btw, the difference might have started earlier than .NET Core 3.1, but I've only tested what was available at https://dotnetfiddle.net.
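The same bit check can also be done more compactly by viewing the 64 bits as a single integer - a minimal sketch (the hex value in the comment is just the byte dump above read as one little-endian number):

using System;

public class BitsCheck
{
    public static void Main()
    {
        Double doubleVal = 9.38;
        Int32 i = 100;

        // Reinterpret the 64 bits of the product as a long and print them in hex.
        // Reading the byte dump above as little-endian gives 0x408D500000000001,
        // i.e. 938.0 plus one ulp - the same on .NET Framework and .NET Core+.
        long bits = BitConverter.DoubleToInt64Bits(doubleVal * i);
        Console.WriteLine(bits.ToString("X16"));
    }
}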
This isn't a change in the result of the multiplication. It's a change in the default string formatting - and one which makes the resulting string a more accurate representation of the result.
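.NET Core 3.0 changed the default Double.ToString() to produce the shortest string that round-trips back to the same value; on older runtimes you can request a round-trippable form explicitly. A minimal sketch, assuming a plain console app, using the standard "G17" format specifier:

using System;

class FormatDemo
{
    static void Main()
    {
        double initial = 9.38;
        double result = initial * 100;

        // Default formatting differs by runtime:
        // .NET Framework 4.x prints "938"; .NET Core 3.0+ prints "938.0000000000001".
        Console.WriteLine(result);

        // "G17" always emits enough digits to round-trip the double,
        // e.g. something like "938.00000000000011" on either runtime.
        Console.WriteLine(result.ToString("G17"));
    }
}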
Here's another version of the code, this time using my DoubleConverter code to show the exact value of the double in decimal form:
using System;

class Program
{
    static void Main()
    {
        double initial = 9.38;
        double actualResult = initial * 100;
        double literal1 = 938.0000000000001;
        double literal2 = 938;

        Console.WriteLine($"Actual result ToString: {actualResult}");
        Console.WriteLine($"Actual result exact value: {DoubleConverter.ToExactString(actualResult)}");
        Console.WriteLine($"Equal to literal 938.0000000000001? {actualResult == literal1}");
        Console.WriteLine($"Equal to literal 938? {actualResult == literal2}");
    }
}
Results for .NET 8.0:
Actual result ToString: 938.0000000000001
Actual result exact value: 938.0000000000001136868377216160297393798828125
Equal to literal 938.0000000000001? True
Equal to literal 938? False
Results for .NET 4.8:
Actual result ToString: 938
Actual result exact value: 938.0000000000001136868377216160297393798828125
Equal to literal 938.0000000000001? True
Equal to literal 938? False
To my mind, the .NET 8.0 results are better - basically, if you take the output of ToString and use that as a double literal in the code, you end up with the exact same value as the result of the multiplication. That's not true for .NET 4.8.