postgresql, autovacuum

Increasing autovacuum_vacuum_cost_limit to prevent multiXactID depletion


I have a large PostgreSQL database backing a web server. The server receives a steady load of requests (up to ~1,000 requests per second), and each request inserts a single row into each of several tables. Since the inserted rows all reference the same row in another table, that produces many concurrent row locks on that row, which consumes MultiXactIDs quickly.

When the MultiXactID space is close to exhaustion, autovacuum should kick in and prevent the wraparound. In my case, though, one of the tables is massive (~5 billion rows), so that autovacuum run can take days and may not complete in time.

When this happens, all inserts fail with "multixact "members" limit exceeded (SQLSTATE 54000)".

Is increasing autovacuum_vacuum_cost_limit an effective way to address this situation? My thinking is that if autovacuum runs faster, it is more likely to prevent the wraparound in time.

I'm running PostgreSQL 15.12.
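
For reference, this is roughly how I monitor how close each table is to multixact wraparound, using `mxid_age()` on `pg_class.relminmxid` (the column and function names are standard; the threshold to compare against is `autovacuum_multixact_freeze_max_age`, 400 million by default):

```sql
-- Tables with the oldest multixact horizon; autovacuum triggers an
-- anti-wraparound vacuum once multixact_age exceeds
-- autovacuum_multixact_freeze_max_age
SELECT relname,
       mxid_age(relminmxid) AS multixact_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY multixact_age DESC
LIMIT 10;
```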


Solution

  • If autovacuum is too slow, you should speed it up:

    To tune autovacuum to be as fast as possible on a table, run

    ALTER TABLE tab SET (autovacuum_vacuum_cost_delay = 0);
    

    Then autovacuum will run without any pause.
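
    Since the question asks about the cost limit specifically: with the delay at 0 the cost limit no longer matters, but if you prefer to keep a small delay, raising the limit has a similar effect. A table-level override might look like this (the value shown is the maximum allowed, 10000, and is purely illustrative):

    ```sql
    -- Let autovacuum accumulate more I/O cost before it pauses;
    -- equivalent in spirit to lowering the delay
    ALTER TABLE tab SET (autovacuum_vacuum_cost_limit = 10000);
    ```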

    In addition, give autovacuum enough memory to run as fast as possible by raising maintenance_work_mem.
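
    For example, as a server-wide setting (the 1GB value is illustrative; note that on PostgreSQL 15, VACUUM caps its dead-tuple storage at 1GB regardless, so raising it beyond that mainly helps index cleanup phases):

    ```sql
    -- Takes effect for new autovacuum workers after a config reload
    ALTER SYSTEM SET maintenance_work_mem = '1GB';
    SELECT pg_reload_conf();
    ```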

    If that is still not enough, it may help to start the cleanup earlier. You do that by lowering autovacuum_multixact_freeze_max_age for that table, e.g.

    ALTER TABLE tab SET (autovacuum_multixact_freeze_max_age = 50000000);
    

    If you still cannot finish in time with all these adjustments, the only remaining solution is to partition the table. Autovacuum can then process the partitions in parallel (you will probably need to increase autovacuum_max_workers), and it should finish in time.
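
    A minimal sketch of that approach, assuming hypothetical table and column names (hash partitioning by a key spreads the rows, and therefore the vacuum work, across partitions):

    ```sql
    -- Illustrative hash-partitioned replacement for the huge table;
    -- four partitions shown, a real table would likely use more
    CREATE TABLE tab_partitioned (
        id   bigint NOT NULL,
        data text
    ) PARTITION BY HASH (id);

    CREATE TABLE tab_partitioned_p0 PARTITION OF tab_partitioned
        FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    -- ... repeat for remainders 1 through 3

    -- Allow several partitions to be vacuumed concurrently
    -- (changing this parameter requires a server restart)
    ALTER SYSTEM SET autovacuum_max_workers = 8;
    ```

    Each partition is a separate table from autovacuum's point of view, so no single vacuum run has to cover all ~5 billion rows at once.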