I have a calculation that may result in very, very large numbers that won't fit into a float64. I thought about using np.longdouble, but that may not be large enough either.
I'm not so interested in precision (just 8 digits would do for me). It's the magnitude, not the fractional part, that won't fit. And I need to have an array of those.
Is there a way to represent / hold an unlimited-size number, say, limited only by the available memory? Or if not, what is the absolute max value I can place in a numpy array?
Can you rework the calculation so it works with the logarithms of the numbers instead?
That's pretty much how the built-in floats work in any case...
You would only convert the number back to linear for display, at which point you'd separate the integer and fractional parts; the fractional part gets exponentiated as normal to give the 8 digits of precision, and the integer part goes into the "×10ⁿ" or "×eⁿ" or "×2ⁿ" part of the output (depending on what base logarithm you use).
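A minimal sketch of the idea, using base-10 logarithms: instead of computing a huge product directly, sum the logs of the factors, then split the result into an integer exponent and a fractional mantissa for display. As a concrete (hypothetical) example, 100000! overflows float64 by a wide margin, but its log fits comfortably:

```python
import numpy as np

# Work in log space: log10 of the product = sum of the log10s.
# 100000! itself is far beyond float64's max (~1.8e308),
# but its logarithm is just a modest number.
logs = np.log10(np.arange(1, 100_001, dtype=np.float64))
log_total = logs.sum()

# For display, split into integer exponent and fractional mantissa.
exponent = int(np.floor(log_total))
mantissa = 10 ** (log_total - exponent)   # in [1, 10)
print(f"100000! ≈ {mantissa:.8f}e{exponent}")
```

Multiplication and division in linear space become addition and subtraction of the logs; for sums of log-space values, `np.logaddexp` (natural log) or `np.logaddexp2` (base 2) avoid overflowing the intermediate exponentials.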