Tags: python, python-3.x, numpy

Why do integers have no size limit in Python 3?


In Python 3, there is no limit on the value of an integer, as mentioned in this post.

This has an interesting side effect in NumPy: depending on the magnitude of the integer, you might get an integer dtype or an object dtype.

import numpy as np

np.array(int('9'*3)).dtype
# int64  (999 fits comfortably in a signed 64-bit integer)

np.array(int('9'*38)).dtype
# object  (38 nines exceed any 64-bit range, so NumPy stores a Python object)

np.array(2**64-1).dtype
# uint64  (too large for int64, but exactly the maximum of uint64)
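
A quick way to see where the switch happens (a minimal sketch, assuming a 64-bit NumPy build; the thresholds are simply the int64 and uint64 ranges):

import numpy as np

np.array(2**63 - 1).dtype
# int64  (the largest value a signed 64-bit integer can hold)

np.array(2**63).dtype
# uint64  (one past int64's maximum still fits in unsigned 64-bit)

np.array(2**64).dtype
# object  (past uint64's maximum, NumPy falls back to Python objects)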

Can someone explain why Python 3 does not have a limit on integer size, and how it is done under the hood?


Solution

  • In Python 2 there were two types, int and long, and values were automatically promoted from int to long when they became too large (or too small) for an int to hold. A long used as many bytes as necessary to hold the value, at the cost of some performance.

    It was possible, however, to have a long whose value would have fit in an int, and there was special literal syntax for long values: 0L was zero as a long, whereas 0 was zero as an int. The two values were equal, but still of different types.

    In Python 3, the two types were unified. There is only one int type, and it behaves like the old long: in CPython, every int is stored as a variable-length array of "digits" (typically 30 bits each), so the representation simply grows as the value needs more room. Some other implementations, such as PyPy, use a machine integer internally and switch to an arbitrary-precision form only when the value outgrows it.
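
    You can see both the unified type and the growing representation from a Python 3 session. This is a minimal sketch; the byte counts from sys.getsizeof and the digit width reported by sys.int_info are CPython implementation details and vary by build:

    import sys

    # One type, regardless of magnitude; there is no separate long.
    type(10)                     # <class 'int'>
    type(10**100)                # <class 'int'>

    # CPython stores an int as a variable-length array of "digits";
    # sys.int_info reports how wide each digit is (typically 30 bits).
    sys.int_info.bits_per_digit  # e.g. 30

    # The object simply grows as the value needs more digits.
    sys.getsizeof(2**30)         # a few dozen bytes
    sys.getsizeof(2**3000)       # hundreds of bytes

    # Arithmetic never overflows; results are exact at any size.
    2**64                        # 18446744073709551616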