Tags: sql, postgresql, go

What's the maximum number OFFSET and LIMIT can accept in PostgreSQL?


Can OFFSET and LIMIT accept a BIGINT-sized integer (up to 9223372036854775807) or just an INTEGER (up to 2147483647)?

I couldn't find any mention of this in the PostgreSQL documentation.

I'm asking because users can send offset and limit parameters with a request, and I'm not sure whether I should accept int64 or int32.


Solution

  • It does accept bigint, not just int, so you need Go's int64. PostgreSQL sometimes calls bigint int8, but that 8 counts bytes; at 8 bits per byte it's a 64-bit integer.
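
    Since the column of interest is effectively a bigint, the Go side can parse the request parameters with `strconv.ParseInt` and bit size 64. A minimal sketch (the `parseLimit` name and the non-negativity check are my assumptions, not part of the answer):

    ```go
    package main

    import (
    	"fmt"
    	"strconv"
    )

    // parseLimit converts a user-supplied LIMIT/OFFSET parameter to int64.
    // bitSize 64 matches SQL bigint, so nothing PostgreSQL accepts is lost.
    func parseLimit(s string) (int64, error) {
    	n, err := strconv.ParseInt(s, 10, 64)
    	if err != nil {
    		return 0, fmt.Errorf("not a valid bigint: %q", s)
    	}
    	if n < 0 { // negative LIMIT/OFFSET makes no sense for a request
    		return 0, fmt.Errorf("must be non-negative: %d", n)
    	}
    	return n, nil
    }

    func main() {
    	for _, s := range []string{"9223372036854775807", "9223372036854775808", "-1"} {
    		n, err := parseLimit(s)
    		fmt.Println(n, err)
    	}
    }
    ```

    `ParseInt` rejects 9223372036854775808 with a range error, which mirrors exactly where PostgreSQL itself starts complaining (see below).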

    The doc on select..limit keeps quiet about it, but it's easy enough to find out:
    demo at db<>fiddle

    create table t(c) as values (1),(2);
    select * from t limit 32767                -- upper smallint limit
                  offset 32767;
    select * from t limit 2147483647           -- upper integer limit
                  offset 2147483647;
    select * from t limit 9223372036854775807  -- upper bigint limit
                  offset 9223372036854775807;
    

    All of these work fine and it's beyond bigint where it starts complaining:

    select * from t limit 9223372036854775808  -- just above upper bigint limit
                  offset 9223372036854775808;
    
    ERROR:  bigint out of range
    

    A limit above your number of rows does nothing, and an offset beyond what you have just produces an empty result set.
    What's more worrying is a client not using limit and offset at all, just requesting as much as the db can offer. And offsetting through the majority of what the db has to process is just wasteful. The doc warns about both over- and undershooting your limits:
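
    One hedge against such clients is to apply a server-side default and cap before the value ever reaches the query. A sketch, with arbitrary example numbers (the function name and the 20/100 values are illustrative, not from the answer):

    ```go
    package main

    import "fmt"

    // clampLimit applies a default and a server-side cap to a
    // client-requested page size, so a missing or huge limit can't
    // ask the db for everything it has.
    func clampLimit(requested int64) int64 {
    	const defaultLimit, maxLimit = 20, 100 // example policy values
    	switch {
    	case requested <= 0:
    		return defaultLimit
    	case requested > maxLimit:
    		return maxLimit
    	default:
    		return requested
    	}
    }

    func main() {
    	fmt.Println(clampLimit(0), clampLimit(50), clampLimit(1_000_000))
    }
    ```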

    The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.

    The rows skipped by an OFFSET clause still have to be computed inside the server; therefore a large OFFSET might be inefficient.

    Postgres is capped at about 4e9 pages per table, and even a table without any columns would need all of that to store around 1e12 rows. That means the highest row count a single table can hold is millions of times smaller than what limit and offset allow you to skip. You'd have to apply some very liberal joins, unions, partitioning and inheritance to run out of that range and require numeric-sized skips, at which point I think you have bigger problems to deal with.
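
    That arithmetic can be checked back-of-the-envelope style. The rows-per-page figure below is my assumption (8 kB pages, minimal heap tuples of roughly 28 bytes including the line pointer), so treat the result as an order of magnitude only:

    ```go
    package main

    import (
    	"fmt"
    	"math"
    )

    func main() {
    	const pages = 1 << 32    // cap on pages per table: ~4e9
    	const rowsPerPage = 290  // assumed fit for zero-column heap tuples
    	maxRows := float64(pages) * rowsPerPage
    	fmt.Printf("max rows per table ~ %.1e\n", maxRows)
    	// headroom: how many times more bigint can express than any
    	// single table can possibly hold
    	fmt.Printf("bigint headroom    ~ %.1e\n", math.MaxInt64/maxRows)
    }
    ```

    The headroom comes out in the millions, so bigint-sized limit/offset values are nowhere near a practical constraint for a single table.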

    If you're using limit..offset for pagination, you could consider alternatives. Keyset pagination (sometimes confusingly called cursor pagination, even though it doesn't use a SQL cursor) saves all the compute you'd normally spend producing a large set of rows just so limit..offset can discard a bunch at the front and back. With keyset pagination you instead seek right to the page you want, using the last key you saw.
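
    To illustrate the idea, here is the `where id > last order by id limit n` pattern mimicked over an in-memory slice in Go. A real implementation would run that SQL against the database and let an index on `id` do the seeking; the `Row` type, `keysetPage` name, and data are all illustrative:

    ```go
    package main

    import "fmt"

    // Row stands in for a table row; only the key column matters here.
    type Row struct{ ID int64 }

    // keysetPage mimics: SELECT * FROM t WHERE id > $after ORDER BY id LIMIT $n.
    // Instead of counting past `offset` rows, it resumes past the last key
    // seen on the previous page.
    func keysetPage(rows []Row, after int64, n int) []Row {
    	var page []Row
    	for _, r := range rows { // rows assumed sorted by ID, like an index scan
    		if r.ID > after {
    			page = append(page, r)
    			if len(page) == n {
    				break
    			}
    		}
    	}
    	return page
    }

    func main() {
    	rows := []Row{{1}, {3}, {4}, {7}, {9}}
    	p1 := keysetPage(rows, 0, 2) // first page
    	fmt.Println(p1)
    	p2 := keysetPage(rows, p1[len(p1)-1].ID, 2) // next page, keyed on last ID
    	fmt.Println(p2)
    }
    ```

    Note the trade-off: keyset pagination requires a unique, ordered key and only supports "next/previous page" style navigation, not jumping to an arbitrary page number.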


    Whenever the documentation fails, you can look at the source. The grammar only holds a clue in a comment:

    We need FCONST as well as ICONST because values that don't fit in the platform's "long", but do fit in bigint, should still be accepted here. (This is possible in 64-bit Windows as well as all 32-bit builds.)

    The Query struct is a bit more open about it (int8 expr):

        Node       *limitOffset;    /* # of result tuples to skip (int8 expr) */
        Node       *limitCount;     /* # of result tuples to return (int8 expr) */
    

    Further:

    node->offset = DatumGetInt64(val);