What is the difference between `decimal`, `float` and `double` in .NET?
When would someone use one of these?
`float` (the C# alias for `System.Single`) and `double` (the C# alias for `System.Double`) are floating binary point types. `float` is 32-bit; `double` is 64-bit. In other words, they represent a number like this:
10001.10010110011
The binary number and the location of the binary point are both encoded within the value.
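To make that concrete, here's a small sketch of my own (the value 17.5625 is picked simply because it is exactly 10001.1001 in binary) showing the sizes involved, and that a `double`'s 64 bits encode the digits and the position of the binary point together:

```csharp
using System;

class BinaryFloatingPointDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(float));   // 4 bytes = 32 bits
        Console.WriteLine(sizeof(double));  // 8 bytes = 64 bits

        // 17.5625 is exactly 10001.1001 in binary; its 64 bits hold the
        // sign, the exponent (where the binary point sits) and the
        // mantissa (the binary digits) all in one value.
        long bits = BitConverter.DoubleToInt64Bits(17.5625);
        Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));
    }
}
```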
`decimal` (the C# alias for `System.Decimal`) is a floating decimal point type. In other words, it represents a number like this:
12345.65789
Again, the number and the location of the decimal point are both encoded within the value – that's what makes `decimal` still a floating point type instead of a fixed point type.
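If you want to see that for yourself, here's a quick sketch of mine using `decimal.GetBits`: a `decimal` stores a 96-bit integer plus a scale saying how many digits sit after the decimal point.

```csharp
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        int[] bits = decimal.GetBits(12345.65789m);

        // bits[0..2] hold the 96-bit integer 1234565789;
        // bits[3] holds the sign and the scale (here 5, i.e. five
        // digits after the decimal point).
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine(string.Join(", ", bits));
        Console.WriteLine(scale);  // 5
    }
}
```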
The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
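A short sketch (again my own example) showing both kinds of approximation:

```csharp
using System;

class ApproximationDemo
{
    static void Main()
    {
        // Binary floating point: neither 0.1 nor 0.2 is exactly
        // representable, so the sum isn't exactly 0.3 either.
        Console.WriteLine((0.1 + 0.2).ToString("R"));  // 0.30000000000000004
        Console.WriteLine(0.1 + 0.2 == 0.3);           // False

        // Decimal floating point: 1/3 can't be represented exactly either.
        decimal third = 1m / 3m;
        Console.WriteLine(third);             // 0.3333333333333333333333333333
        Console.WriteLine(third * 3m == 1m);  // False
    }
}
```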
As for what to use when:
For values which are "naturally exact decimals" it's good to use decimal
. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
For values which are more artefacts of nature which can't really be measured exactly anyway, `float`/`double` are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
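To illustrate the financial case with a hypothetical example of my own: adding ten 10-cent amounts gives exactly 1.0 with `decimal`, but not with `double`.

```csharp
using System;

class MoneyDemo
{
    static void Main()
    {
        double dTotal = 0;
        decimal mTotal = 0m;

        for (int i = 0; i < 10; i++)
        {
            dTotal += 0.1;   // ten 10-cent items as binary floating point
            mTotal += 0.1m;  // the same amounts as decimal
        }

        Console.WriteLine(dTotal == 1.0);   // False: 0.9999999999999999
        Console.WriteLine(mTotal == 1.0m);  // True: exactly 1.0
    }
}
```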