There's an aspect of Postgres storage allocation for large objects that I don't understand.
In more detail, the reported allocation statistics don't match the allocation I calculate from the tables involved.
PostgreSQL version: 12
I have 5 tables in my DB that use LOB fields, implemented as oid columns referencing large objects.
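Just to illustrate the layout (the table and column names here are made-up examples, not my real schema): the oid column only stores a reference, while the actual bytes live in pg_largeobject.
-- hypothetical example of one of the 5 tables
CREATE TABLE documents (
    id      serial PRIMARY KEY,
    name    text,
    content oid  -- reference to a large object created with lo_create()/lo_import()
);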
If I execute this query
select pg_database_size('<db_name>');
the DBMS tells me that the size is 2.8 GB.
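(For readability, I believe the same number can be pretty-printed; current_database() just avoids hard-coding the name:)
-- human-readable variant of the same check
SELECT pg_size_pretty(pg_database_size(current_database()));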
If I execute this query
SELECT
table_schema || '.' || table_name AS table_full_name,
pg_size_pretty(pg_total_relation_size('"' || table_schema || '"."' || table_name || '"')) AS size
FROM information_schema.tables
ORDER BY
pg_total_relation_size('"' || table_schema || '"."' || table_name || '"') DESC;
the DBMS tells me that the large objects table (pg_largeobject) is 2.6 GB.
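If I'm not mistaken, the same figure can be cross-checked directly against the system tables that hold the LO data:
-- size of the large object storage itself (data plus metadata)
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject')) AS lo_data,
       pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject_metadata')) AS lo_metadata;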
Dry-running vacuumlo, I can find which tables are using large objects; there are 5 of them.
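As far as I understand, vacuumlo just looks for columns of type oid (or lo), so a query like this should list the same candidate tables (my own sketch, not vacuumlo's actual query):
-- columns whose type is oid or lo, i.e. candidates for holding LO references
SELECT n.nspname AS schema_name, c.relname AS table_name, a.attname AS column_name
FROM pg_attribute a
JOIN pg_class c ON c.oid = a.attrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_type t ON t.oid = a.atttypid
WHERE t.typname IN ('oid', 'lo')
  AND c.relkind = 'r'
  AND a.attnum > 0
  AND NOT a.attisdropped
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');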
Considering that large objects are stored in chunks of 2048 bytes, I can estimate the disk consumption of each table, and those estimates come nowhere near 2.6 GB.
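For completeness, something like this should show where the bytes are going per object (reading pg_largeobject directly needs superuser rights, if I recall correctly):
-- approximate stored size of the 10 biggest large objects
SELECT loid,
       pg_size_pretty(sum(octet_length(data))::bigint) AS stored_size
FROM pg_catalog.pg_largeobject
GROUP BY loid
ORDER BY sum(octet_length(data)) DESC
LIMIT 10;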
If I dry-run vacuumlo, it says there are no orphans.
So, how can I find out how the 2.6 GB is allocated? How can I free it?
OK guys, problem solved. I tried executing vacuumdb with the -f switch and it worked! Running without -f removes the orphans BUT keeps the space allocated to the table; if you want to release that space you have to specify that parameter. Doing so, we shrank the DB size from 2.8 GB to 140 MB.
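For the record, as far as I know -f simply makes vacuumdb issue VACUUM FULL, which rewrites the tables and returns the freed space to the operating system (note that it takes an exclusive lock while it runs). The SQL equivalent should be something like:
-- rewrite the large object storage and give the freed space back to the OS
-- (run as superuser; plain "VACUUM FULL;" does the whole database)
VACUUM FULL pg_largeobject;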