I realized the other day that most Common Lisp implementations have 128-bit "long-floats". As a result, the most positive long float is:
8.8080652584198167656 * 10^646456992
while the most positive double float is 1.7976931348623157 * 10^308, which is pretty big already.
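For reference, both limits are exposed as standard constants, so you can check them at the REPL. A quick sketch, assuming CLISP (where long-float precision is adjustable and produces the value quoted above); in SBCL or CCL, LONG-FLOAT is the same as DOUBLE-FLOAT, so both constants report the same limit:

```lisp
;; Standard constants for the largest finite values of each float type.
most-positive-double-float
;; => 1.7976931348623157d308
most-positive-long-float
;; => 8.8080652584198167656L646456992   (CLISP's default long-float precision;
;;    other implementations may report the double-float limit here)
```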
I wanted to know whether anyone has ever needed a number bigger than 1.7976931348623157 * 10^308, and if so, under what circumstances.
Do you feel such a type is useful to have by default in a programming language?
Is the range or precision of a 64-bit double float not enough in some circumstances? I would love to hear use cases.
Scientists use this kind of thing, and occasionally arbitrary-precision integers/floats/decimals as well. For most uses, though, 32-bit or 64-bit floats are enough.
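If the goal is just to represent very large or exact magnitudes, here is a minimal sketch of what standard Common Lisp already gives you without any wide float type, since integers (bignums) and rationals are exact and unbounded:

```lisp
;; Exact, unbounded integer and rational arithmetic beyond the double-float range.
(expt 10 400)            ; an exact 401-digit integer
(/ (expt 10 400) 3)      ; an exact rational of the same magnitude
(floatp (expt 10 400))   ; => NIL -- it is an integer, not a float
;; (coerce (expt 10 400) 'double-float) would overflow the double-float range
;; and, on most implementations, signal a FLOATING-POINT-OVERFLOW error.
```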
See also: