I was wondering what the difference is between converting an integer to a double with a (double) cast versus multiplying the integer by 1.0:

int a = 4;
int b = 3;
double c = 1.0 * a / b;
double d = (double) a / b;
Ultimately both c and d output 1.333... From what I understand, c ends up as a double because 1.0 * a promotes a to a double, and when one operand of a division is a floating-point value the result is floating-point as well. But in a hypothetical situation where you just want to convert an integer to a double, what's the difference between the two methods?
Similar questions:

1: Converting an int to a double by multiplying by 1.0 or adding 1d? (incomprehensible)
2: Cast with (double) or "* 1.0"? (asks about best practices)
Use the cast. When writing code, correctness and maintainability are both important. While both choices may give you the correct answer, only one of them obviously says "I want to cast an int to a double." The other says "Why is this number being multiplied by 1.0? That looks useless. Oh, it's an int, so the multiplication turns it into a double." That is pointless cognitive load to impose on the future reader of the code, which might be you in three months, wondering what three-months-younger you was thinking when they wrote it.
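As a sketch (again assuming Java), here are the two idioms side by side, plus the one pitfall that's easy to hit with the cast if you parenthesize the division first:

public class CastVsMultiply {
    public static void main(String[] args) {
        int a = 4;
        int b = 3;

        double viaCast = (double) a / b;    // reads as "convert a to double, then divide"
        double viaMultiply = 1.0 * a / b;   // same value, but the intent hides behind a multiplication
        double pitfall = (double) (a / b);  // integer division happens first, then the cast

        System.out.println(viaCast);     // 1.3333333333333333
        System.out.println(viaMultiply); // 1.3333333333333333
        System.out.println(pitfall);     // 1.0
    }
}

The cast binds more tightly than the division, so (double) a / b converts a before dividing; wrapping the division in parentheses defeats the purpose.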