postgresql, go, out-of-memory, pgx

Do prepared statements in PostgreSQL respect shared_buffers memory limits?


Recently I observed out-of-memory (OOM) kills on a PostgreSQL 16 server used by an application that makes heavy use of prepared statements. Memory usage would grow gradually until the server was OOM-killed.

How can I limit memory usage here? I expected shared_buffers and similar limits to prevent this from happening.

We're using the pgx driver (v5.7.2) for Go.
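
To see where the memory actually goes, one option is to inspect the backend's memory contexts. The sketch below assumes PostgreSQL 14+ (which provides the pg_backend_memory_contexts view) and a placeholder DATABASE_URL connection string; it reports on the backend serving the session that runs the query, so it has to run on a connection that has already built up prepared statements.

    package main

    import (
        "context"
        "database/sql"
        "fmt"
        "log"
        "os"

        _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" database/sql driver
    )

    func main() {
        // DATABASE_URL is a placeholder; use whatever connection string the app uses.
        db, err := sql.Open("pgx", os.Getenv("DATABASE_URL"))
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // pg_backend_memory_contexts shows per-backend memory, which is separate
        // from shared_buffers (the shared data-page cache).
        rows, err := db.QueryContext(context.Background(), `
            SELECT name, used_bytes, total_bytes
              FROM pg_backend_memory_contexts
             ORDER BY total_bytes DESC
             LIMIT 10`)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var name string
            var used, total int64
            if err := rows.Scan(&name, &used, &total); err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%-30s used=%d total=%d\n", name, used, total)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
    }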


Solution

  • According to BUG #14726: Memory consumption of PreparedStatement, filed against PostgreSQL 9.5.7, there is no way to limit the memory used by prepared statements. A single sufficiently large query, multiplied across a handful of prepared statements, can exhaust all available memory.

    According to the lone answer to that report, applications have to account for memory limits in how they use prepared statements, but there is no way to determine what a prepared statement's memory footprint actually is.

    That answer argues against having the server evict prepared statements on its own: "I'd be against that, as it pretty much would destroy the point of having prepared statements at all --- if the server forgets them at random, or even just has to re-prepare them unexpectedly, it's not holding up its end of the contract. It's the application's job to use that resource in a prudent manner, just like any other SQL resource."

    With pgx, it was enough to switch to QueryExecModeSimpleProtocol, which sends queries over the simple protocol as plain text and therefore never creates server-side prepared statements:

    
    config, cfgErr := pgx.ParseConfig(cfg.ConnString)
    if cfgErr != nil {
        return cfgErr // or handle the error as appropriate
    }
    config.DefaultQueryExecMode = pgx.QueryExecModeSimpleProtocol // plain-text queries, no server-side prepares
    sqldb := stdlib.OpenDB(*config)
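
    If you would rather keep the extended protocol and its plan reuse, a possible middle ground is to bound how many statements pgx caches per connection. This is only a sketch, assuming pgx v5's StatementCacheCapacity field and an illustrative cap of 32; it limits how many server-side prepared statements each connection holds at once, not how much memory any single one uses:

    config, cfgErr := pgx.ParseConfig(cfg.ConnString)
    if cfgErr != nil {
        return cfgErr
    }
    // Keep the default statement-caching exec mode, but cap the per-connection
    // cache; pgx evicts least-recently-used statements once the cap is reached.
    config.StatementCacheCapacity = 32 // illustrative value, tune for your workload
    sqldb := stdlib.OpenDB(*config)

    Whether that is enough depends on how large the individual plans are; switching to the simple protocol, as shown above, remains the most direct way to stop server-side prepared statements from accumulating.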