CPython implements arbitrary-precision integers as PyLongObject, a variable-sized object built on PyObject. On 64-bit platforms even a small integer takes at least 28 bytes, which is quite memory-intensive. It is also well known that CPython keeps a cache of small integer objects for the range -5 to 256.
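For concreteness, this is what those two CPython behaviors look like in a 64-bit interpreter session (the exact getsizeof numbers are version-specific):

    import sys

    # A small int is a full heap object: 28 bytes on typical 64-bit CPython builds.
    print(sys.getsizeof(1))          # 28
    print(sys.getsizeof(2 ** 100))   # larger; grows with the number of digits

    # Integers in -5..256 come from the preallocated cache, so the same object is reused.
    a, b = int("256"), int("256")
    print(a is b)                    # True: cached small-int object
    c, d = int("257"), int("257")
    print(c is d)                    # False: two distinct PyLongObjects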
I am interested in seeing how PyPy implements integers, in particular what optimizations there are for limited-size integer objects. It is difficult to find documentation online. The PyPy docs mention a tagged pointer optimization for "small ints" of 63 bits (signed?). The most obvious optimization to me would be to treat an integer as a primitive value instead of a general-purpose object whenever possible.
The tagged pointer optimization mentioned in the PyPy docs is something you need to enable explicitly, and it is never enabled by default because it comes with a large performance cost.
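To illustrate why tagging is not free, here is a rough sketch of the classic low-bit tagging scheme (illustrative only, not PyPy's actual code): the low bit marks a 63-bit immediate integer as opposed to a real pointer, so every operation has to test, shift and re-tag.

    # Hypothetical sketch of low-bit tagging; names and layout are made up.
    TAGGED_MIN = -(1 << 62)          # 63-bit signed range
    TAGGED_MAX = (1 << 62) - 1

    def tag(n):
        """Encode a small integer directly in the word: (value << 1) | 1."""
        assert TAGGED_MIN <= n <= TAGGED_MAX
        return (n << 1) | 1

    def is_tagged(word):
        return word & 1 == 1

    def untag(word):
        return word >> 1

    def add(a, b):
        # Every arithmetic operation pays for the tag checks and the
        # shift/re-tag dance, which is where the slowdown comes from.
        if is_tagged(a) and is_tagged(b):
            return tag(untag(a) + untag(b))   # overflow would also need the boxed path
        raise NotImplementedError("fall back to the boxed representation")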
Instead, the story is:
There are two different internal representations, one for 64-bit signed integers and one for larger integers.
The common, small representation is a "PyObject"-like object, but only 2 x 8 = 16 bytes in total (including headers etc.). (The reason is that our production GC on 64-bit machines adds only a single 64-bit header, which packs a number identifying the RPython class and a number of GC flags; the W_IntObject class then only adds one 64-bit field for the integer value.)
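A very rough sketch of the idea in plain Python (these are illustrative stand-ins, not PyPy's actual RPython classes): the common case stores one machine word next to the GC header, and a result that no longer fits in 64 bits is promoted to the arbitrary-precision representation.

    INT64_MIN, INT64_MAX = -(1 << 63), (1 << 63) - 1

    class W_IntObject:
        """Stand-in for the compact case: GC header word + one 64-bit value field."""
        __slots__ = ('intval',)
        def __init__(self, intval):
            assert INT64_MIN <= intval <= INT64_MAX
            self.intval = intval

    class W_LongObject:
        """Stand-in for the arbitrary-precision fallback."""
        __slots__ = ('value',)
        def __init__(self, value):
            self.value = value

    def add(a, b):
        # Fast machine-word addition; promote to the big representation only on overflow.
        result = a.intval + b.intval
        if INT64_MIN <= result <= INT64_MAX:
            return W_IntObject(result)
        return W_LongObject(result)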
There is a separate optimization for "list of small integers", which is implemented as an array of 64-bit ints instead of an array of objects (so 8 bytes per item instead of 16 for the object plus 8 for the pointer, and with better cache locality).
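A minimal sketch of that list-strategy idea (again illustrative, not PyPy's code): integers are kept unboxed in a flat 64-bit array, and the list silently switches to a generic object representation the first time a non-fitting item is stored.

    from array import array

    class IntListSketch:
        """Toy version of the 'list of small integers' strategy."""
        def __init__(self, items=()):
            self._ints = array('q', items)   # 'q' = signed 64-bit, 8 bytes per item
            self._objects = None             # generic boxed fallback storage

        def append(self, item):
            if self._objects is not None:
                self._objects.append(item)
            elif isinstance(item, int) and -(1 << 63) <= item < (1 << 63):
                self._ints.append(item)      # stays unboxed: no per-item object
            else:
                # First non-fitting item: switch the whole list to boxed objects.
                self._objects = list(self._ints)
                self._objects.append(item)

        def __getitem__(self, i):
            return self._objects[i] if self._objects is not None else self._ints[i]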