Here is my code
import Foundation

let ns = NumberFormatter()
ns.allowsFloats = true
ns.maximumFractionDigits = 18 // This is a variable value
ns.minimumFractionDigits = 18 // This is a variable value
ns.roundingMode = .floor
ns.numberStyle = .decimal

let doubleValueOfDecimal: Double = 12.95699999999998
let numb = NSNumber(value: doubleValueOfDecimal)
print(numb)

let string = ns.string(from: numb)
print(string)
For the input

doubleValueOfDecimal = 2.95699999999998

the output is
2.95699999999998
Optional("2.956999999999980000")
But if I input
doubleValueOfDecimal = 12.95699999999998
The output is
12.95699999999998
Optional("12.957000000000000000")
The string conversion rounds up the decimal places when I want it to show the exact number. Can someone explain how this works?
You are falling into the cracks between the expected behaviour of decimal numbers and the reality that Float and Double are binary floating-point types: the fractional part of a decimal number is a sum of 1/10s, 1/100s, etc., while for a binary number it is a sum of 1/2s, 1/4s, etc. Some values that are exact in one base are inexact in the other, and vice versa.
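As a quick check (a minimal sketch; the exact trailing digits depend on the nearest representable Double on your platform), you can print the stored values with extra precision to see the binary round-off:

import Foundation

// Neither literal is exactly representable in binary floating point;
// printing with extra digits exposes the value the Double actually holds.
let a: Double = 2.95699999999998
let b: Double = 12.95699999999998
print(String(format: "%.25f", a)) // trailing digits drift away from the literal
print(String(format: "%.25f", b)) // likewise, with its own rounding error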
Change your code to include:
let doubleValueOfDecimal: Decimal = Decimal(string: "12.95699999999998")!
let numb = doubleValueOfDecimal as NSDecimalNumber
and the output is probably what you expect:
12.95699999999998
12.956999999999980000
The Decimal type is a decimal floating-point value type; NSDecimalNumber is a subclass of NSNumber which holds a Decimal value.
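Putting it together, here is a sketch that reuses the formatter settings from the question (the forced unwrap assumes the string parses):

import Foundation

let ns = NumberFormatter()
ns.allowsFloats = true
ns.maximumFractionDigits = 18
ns.minimumFractionDigits = 18
ns.roundingMode = .floor
ns.numberStyle = .decimal

// Decimal keeps the value in base 10, so no binary round-off is introduced.
let decimalValue = Decimal(string: "12.95699999999998")!
let numb = decimalValue as NSDecimalNumber
print(ns.string(from: numb) ?? "nil") // 12.956999999999980000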
HTH
(Note: you have to initialise the Decimal from a string, as using a numeric literal appears to involve the Swift compiler using binary floating point at some point in the process...)
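For example (a sketch; the exact digits produced by the literal path depend on the Double-to-Decimal conversion):

import Foundation

// The Double-based initialiser routes the literal through binary
// floating point first, so round-off can leak into the Decimal.
let fromLiteral = Decimal(12.95699999999998)
// The string initialiser parses directly in base 10.
let fromString = Decimal(string: "12.95699999999998")!
print(fromLiteral) // may carry binary round-off artifacts
print(fromString)  // 12.95699999999998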