Is it possible to run PostgreSQL 11's VACUUM FULL for a short while and then get some benefit? Or does cancelling it midway cause all of its progress to be lost?
I've read about pg_repack (https://aws.amazon.com/blogs/database/remove-bloat-from-amazon-aurora-and-rds-for-postgresql-with-pg_repack/), but the way it works (creating new tables, copying data, etc.) sounds risky to me. Is that just my paranoia, or is it safe to use on a production database?
Backstory: I am working with a very large production database on AWS Aurora PostgreSQL 11. Many of the tables had tens of millions of records but have been pruned down significantly. The problem is that the table sizes on disk (and in the snapshots) have not decreased, because DELETE and VACUUM (without FULL) do not shrink the files. These tables are in the hundreds of gigabytes range, and I'm afraid running VACUUM FULL will take forever.
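For example, this is roughly how I have been confirming the bloat (a sketch; "big_table" is just a placeholder for one of the pruned tables):

    -- On-disk size of the table, including its indexes and TOAST data.
    SELECT pg_size_pretty(pg_total_relation_size('big_table'));

    -- Live vs. dead tuples as seen by the statistics collector; the remaining
    -- row count is tiny compared to the file size, which suggests the space
    -- freed by DELETE was never returned to the operating system.
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'big_table';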
No. VACUUM FULL writes a new physical file for the table, and stopping it before it finishes discards the work done so far; the original file is left as it was.
VACUUM FULL rewrites the entire contents of the table into a new disk file with no extra space, allowing unused space to be returned to the operating system. This form is much slower and requires an ACCESS EXCLUSIVE lock on each table while it is being processed.
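If you do go the VACUUM FULL route, it only pays off once it completes for a given table, so running it one table at a time keeps the exclusive lock (and the work lost on a cancel) limited to a single table. A minimal sketch, again with "big_table" as a placeholder name:

    -- Size before the rewrite.
    SELECT pg_size_pretty(pg_total_relation_size('big_table'));

    -- Rewrites the whole table into a new file under an ACCESS EXCLUSIVE lock;
    -- it must run to completion to have any effect, and cancelling rolls it back.
    VACUUM (FULL, VERBOSE) big_table;

    -- Size after: the freed space has been returned to the operating system.
    SELECT pg_size_pretty(pg_total_relation_size('big_table'));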
This is the main reason community tools like pg_repack or pg_squeeze were created: they are more flexible, less blocking, and often faster, too. (I don't think pg_squeeze is available for Aurora yet.)
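For completeness, pg_repack on Aurora/RDS has two parts: the server-side extension created inside the database, and the pg_repack command-line client that drives the rewrite from another host. A rough sketch, assuming the connection details and table name are placeholders and that the client version matches the server extension:

    -- Inside the target database: install the server-side extension
    -- (pg_repack is on the list of supported Aurora PostgreSQL extensions).
    CREATE EXTENSION pg_repack;

    -- The rewrite itself is then driven by the pg_repack client, e.g.:
    --   pg_repack -h <aurora-endpoint> -U <user> -d <dbname> -k -t big_table
    -- -k (--no-superuser-check) is needed because the RDS/Aurora master user
    -- is not a superuser; -t limits the run to one table, and the table needs
    -- a primary key or a not-null unique index.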