I have a simple C++ program that multiplies two long int
variables and prints the result. Here's the code:
#include <iostream>
using namespace std;
int main() {
    long int a = 100000;
    long int b = 100000;
    long int c = a * b;
    cout << c << endl;
    return 0;
}
On online compilers, the output is as expected: 10000000000. However, when I run the same code on my computer, the output is 1410065408. I'm confused as to why this discrepancy is occurring.
Could someone please explain why this difference in output is happening? Is it related to compiler settings or some other factor? And how can I ensure consistent behavior across different environments?
The standard specifies that long int is at least 32 bits long. It is very often 64 bits, but it can actually be 32. This is up to the compiler, and in your case the two compilers produce different outputs as a result.
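If you want to see what you actually get on a given platform, here is a minimal sketch using only standard headers:

#include <climits>
#include <iostream>

int main() {
    // CHAR_BIT is the number of bits per byte (almost always 8).
    std::cout << "long int:      " << sizeof(long) * CHAR_BIT << " bits\n";
    std::cout << "long long int: " << sizeof(long long) * CHAR_BIT << " bits\n";
    return 0;
}

On a typical 64-bit Linux build this reports 64 bits for long, while on Windows (LLP64) and many 32-bit targets it reports 32, which is exactly the mismatch you are seeing.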
A 32-bit signed integer overflows at 2147483648 (its maximum value is 2147483647).
When overflow occurs for a signed integer, the behavior is undefined, but very often you will see it wrap around (the possible behaviors are listed here).
In the end, the calculation you expect to be 100000 * 100000 effectively becomes (100000 * 100000) % 4294967296, i.e. only the low 32 bits of the result are kept and then reinterpreted as a signed value (but again, this is undefined behavior and something else could have happened on another compiler, independently of long being 32 bits).
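You can reproduce the exact number you observed with well-defined unsigned arithmetic, since keeping the low 32 bits is what two's-complement wraparound amounts to in practice. A small sketch:

#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t full = 100000ULL * 100000ULL;             // 10000000000, no overflow in 64 bits
    std::uint32_t low  = static_cast<std::uint32_t>(full);  // keep only the low 32 bits
    std::cout << low << '\n';                               // prints 1410065408
    return 0;
}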
On the same page I linked above, you will learn that only long long int is guaranteed to be at least 64 bits. There are also fixed-width integer types (in <cstdint>) you can pick from, which have an explicit width on every platform.
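Applied to your program, a minimal sketch of a portable version, assuming you want the product computed in 64 bits everywhere:

#include <cstdint>
#include <iostream>

int main() {
    std::int64_t a = 100000;  // exactly 64 bits wherever int64_t is provided
    std::int64_t b = 100000;
    std::int64_t c = a * b;   // 10000000000 fits comfortably in a 64-bit signed integer
    std::cout << c << std::endl;
    return 0;
}

Using long long for a, b, and c would work just as well here; the fixed-width type simply makes the intent explicit.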
Final note: for the same reason your code produces different outputs on two compilers, the "correct" way to write integer literals, especially if you use them directly rather than assign them to a variable of an explicitly declared type, is to use a suffix. E.g. 100000 is an int, 100000L is a long, and 100000LL is a long long int.
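To make the difference concrete, a short sketch (the unsuffixed expression is the dangerous one on platforms where int is 32 bits):

#include <iostream>

int main() {
    // 100000 * 100000                   // int * int: overflows a 32-bit int, undefined behavior
    long long ok = 100000LL * 100000LL;  // long long * long long: well-defined
    std::cout << ok << std::endl;        // prints 10000000000
    return 0;
}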
What I wrote above really should have been (100000LL * 100000LL) % 4294967296LL.