Tags: java, spring-boot, double, bigdecimal, exponential

Java double converts more than 4 decimal points to exponential format and stores the same exponential format in the DB


I have an entity class (Account) wherein I have the following property:

@Column(name = "balance", precision = 10, scale = 10, nullable = false)

Double balance;

Then in one of my spring services, I'm doing the subtraction of two (double) numbers as follows

Account account = accountRepository.findById(1L).orElseThrow();
Double result = 0.0;
Double amount = 9.99999999;
BigDecimal bdResult = subtract(account.getBalance(), amount);
result = bdResult.setScale(8, RoundingMode.HALF_EVEN).doubleValue();
account.setBalance(result); // It sets the value in exponential format
accountRepository.saveAndFlush(account); // Also in the DB it is stored in exponential form

static BigDecimal subtract(Double balance, Double amount) {
    BigDecimal bd1 = BigDecimal.valueOf(balance);
    BigDecimal bd2 = BigDecimal.valueOf(amount);
    BigDecimal result = bd1.setScale(8, RoundingMode.HALF_EVEN)
            .subtract(bd2.setScale(8, RoundingMode.HALF_EVEN));
    return new BigDecimal(result.toPlainString().trim());
}

Here are the entity (DB) properties (screenshot of the column definitions); FYI I'm using Oracle SQL.

So when data is persisted into the DB the value is stored in exponential form. I want the value to be stored in a precise format i.e. in non-exponential form. For e.g. consider

double balance = 10.0; // account.getBalance();
double amount = 9.99999999;
// After calling AmountUtility.subtract(balance, amount) the result should be as follows
double result = 0.00000001;
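For reference, here is a minimal standalone reproduction of the exponential output, outside Spring, using plain double math (simplified from my service code):

```java
// Minimal reproduction: Double.toString switches to scientific notation
// for magnitudes below 10^-3, which is where the "exponential format" comes from.
public class ExponentialRepro {
    public static void main(String[] args) {
        double balance = 10.0;
        double amount = 9.99999999;
        double result = balance - amount;
        // Prints something like 1.00000...E-8 rather than 0.00000001
        System.out.println(result);
    }
}
```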

I want to ensure the same result format is also persisted in the table as I must follow 8-digit decimal precision.

I have been working on finding a solution for it for days but still haven't found it. Would highly appreciate it if someone could help me here.


Solution

  • // After calling AmountUtility.subtract(balance, amount) the result should be as follows
    double result = 0.00000001;
    

    This shows a serious misunderstanding, because 0.00000001 is not a value that a double can possibly represent. It simply can't, mathematically.

    Take out a piece of paper. Write down the result, in decimal, of dividing 1 by 3.

    You'll find that this job is impossible. There's not enough room. You start writing 0.3333333 and never get it quite right.

    The reason you can do the job when I ask you to write 1 divided by 10, is because in a decimal system, if the divisor can be broken down solely into factors of 2 and 5, it 'works' - you don't get an endless sequence of digits. And 10 is trivially broken down into 2 * 5, of course.

    Computers count in binary. For them, only divisors that can break down into just factors of 2, 'work'. A double can exactly represent 0.5, and even 0.125 (that's 1 / 8, 8 is 2*2*2 and therefore passes muster).

    But 0.00000001? No. That is impossible to represent exactly as a double, for the same reason it is impossible to write 1/3 on a piece of paper in decimal: 0.00000001 is 1/100000000, and 100000000 = 2^8 * 5^8. Those factors of 5 mean the divisor is not a pure power of 2, so the binary expansion never terminates.
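    You can see the inexactness directly: the new BigDecimal(double) constructor exposes the exact binary value a double actually holds, with no decimal rounding on the way out.

```java
import java.math.BigDecimal;

// new BigDecimal(double) shows the exact value stored in the double,
// unlike BigDecimal.valueOf(double), which goes via the rounded string form.
public class ExactDoubleValue {
    public static void main(String[] args) {
        // Not exactly 0.00000001: the printed expansion drifts after some digits
        System.out.println(new BigDecimal(0.00000001).toPlainString());
    }
}
```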

    So, why does double d = 0.1; even compile?

    Because the rules state that every operation in float/double space silently rounds to the nearest representable double, at every step, continuously throughout all math.

    We can see this blow up in our face real easily:

    double v = 0;
    for (int i = 0; i < 10; i++) v += 0.1;
    System.out.println(v == 1.0);
    

    Run the above and realize how broken this all gets, because it prints, bizarrely, false! According to computers, evidently, ten times a tenth isn't equal to one.

    It's not equal to 1 for the same reason this isn't: take the 0.3333333 you wrote on that piece of paper earlier and multiply it by 3. You get 0.9999999. Ask anyone whether that equals 1, and they correctly say: No. It's close, but it is not equal. That's exactly what the system is doing. Make the loop add 0.125 instead and it works fine, because 1/8 is a pure power of 2.
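    To verify the 0.125 claim, here is the same loop with eighths instead of tenths:

```java
// 0.125 is 1/8, a pure power of two, so it is exactly representable,
// and so is every partial sum k/8: no rounding error accumulates.
public class EighthsAddUp {
    public static void main(String[] args) {
        double v = 0;
        for (int i = 0; i < 8; i++) v += 0.125;
        System.out.println(v == 1.0); // prints true
    }
}
```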

    That 'silently round to the nearest double' stuff is inherent, cannot be disabled, cannot be controlled, and cannot be queried (you can't ask how big the error was for the previous operation, for example).

    So, DO NOT use double for finance stuff!

    Yes, bad news. You have to do some serious refactoring work on your codebase. Most of it has to be reviewed.

    The two usual alternate paths involve atomicity and BigDecimal.

    Atomicity

    Atomicity is the simple solution: store everything in atomic units. So, store long priceInCents = 420; instead of double price = 4.20;. This 'works' because the vast majority of interactions with external stuff, including not just banks and other applications but also human inputs and human eyeballs, either does not support fractional cents or strongly prefers not to deal with them. Instead, these interactions usually dictate a rounding behaviour, and it's simpler to apply those rules as you run into them than to keep an endless (and endlessly memory-hungry) log of fractional amounts and round only when you interact with a system that requires it, because at that point you can only apply a global rule ('we round down' / 'we round up' / oh, wait, what do we do with exactly half a cent again?).

    All currencies have an atomic unit. Yen has the yen itself. Bitcoin has the satoshi. Euros and dollars have cents. It's usually, but not always, one hundredth of the standard unit.
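    For the question's 8-decimal requirement, here is a minimal sketch of the atomic approach. The names and the choice of 10^-8 units (modelled on satoshi) are illustrative, not a drop-in fix for the entity above:

```java
import java.math.BigDecimal;

// Sketch: balances held as long counts of 10^-8 units.
// All arithmetic is exact integer math; formatting happens only at the edges.
public class AtomicUnits {
    static final int SCALE = 8; // 8 decimal digits, per the requirement

    // Render atoms as a plain (non-exponential) decimal string
    static String toPlain(long atoms) {
        return BigDecimal.valueOf(atoms, SCALE).toPlainString();
    }

    public static void main(String[] args) {
        long balance = 1_000_000_000L; // 10.00000000
        long amount  =   999_999_999L; //  9.99999999
        long result = balance - amount; // exact: 1 atom
        System.out.println(toPlain(result)); // prints 0.00000001
    }
}
```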

    BigDecimal

    That's the one you pick if you must have perfect accuracy. But, because it is perfect, it takes as much memory as it needs (which can be a lot), is hard to use and slow, and just outright crashes if you ask it to divide by 3, because computers are finite pieces of paper after all. You can solve that crash by telling it how to round, but then you're still rounding. If you're going to round you might as well round on cents and just use a long instead.

    Don't use BD unless you really know what you are doing.
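    To make the divide-by-3 point concrete: BigDecimal.divide throws unless you supply a rounding rule, and once you supply one, you are rounding after all.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalDivision {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = BigDecimal.valueOf(3);
        try {
            one.divide(three); // 1/3 has no exact decimal expansion
        } catch (ArithmeticException e) {
            System.out.println("threw ArithmeticException");
        }
        // Supplying a scale and rounding mode works, but the result is rounded:
        System.out.println(one.divide(three, 8, RoundingMode.HALF_EVEN));
    }
}
```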

    Now that we've dealt with that