I've got a table (10k rows) that stores large values in a text column. The current largest is 417 MB uncompressed (85 MB toasted). The flaw in this design is that it's not possible to stream these values (e.g. over JDBC) - anything using this column must read the whole thing into memory.
Are there any tools or shortcuts available to migrate this column to large objects, minimising the working disk and memory required?
I'll be using lo_compat_privileges, if that makes any difference.
Why not just use lo_from_bytea?
Example:
SELECT 'test'::text::bytea;
bytea
------------
\x74657374
(1 row)
SELECT lo_from_bytea(0, 'test'::text::bytea);
lo_from_bytea
---------------
274052
(1 row)
SELECT lo_get(274052);
lo_get
------------
\x74657374
(1 row)
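(If you run this example yourself, note that it leaves behind a throw-away large object; it can be removed again with lo_unlink, using the OID returned above:)

SELECT lo_unlink(274052);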
So, to actually move the data from text to OIDs (you'd better have a backup), you can do the following:
ALTER TABLE mytable ADD COLUMN value_lo OID;
UPDATE mytable SET value_lo = lo_from_bytea(0, value::bytea), value = NULL;
ALTER TABLE mytable DROP COLUMN value;
ALTER TABLE mytable RENAME COLUMN value_lo TO value;
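If keeping the working disk space down matters (as in the question), the single UPDATE can be replaced by a row-at-a-time loop, so each row's old, TOASTed text version becomes dead in its own transaction and can be reclaimed by autovacuum while the migration is still running. This is only a sketch, not part of the steps above: it assumes the table has an id primary key and PostgreSQL 11 or later (for COMMIT inside a DO block), and it must be run outside an explicit transaction block.

DO $$
DECLARE
    r RECORD;
BEGIN
    -- Convert one row per transaction; 'id' is an assumed primary key column.
    FOR r IN SELECT id FROM mytable WHERE value IS NOT NULL ORDER BY id LOOP
        UPDATE mytable
           SET value_lo = lo_from_bytea(0, value::bytea),
               value    = NULL
         WHERE id = r.id;
        COMMIT;  -- requires PostgreSQL 11+; run the DO block outside BEGIN/COMMIT
    END LOOP;
END;
$$;

Each value still passes through memory once as bytea, so peak memory is bounded by the largest single value either way; the per-row commits mainly limit how much dead, not-yet-vacuumable data piles up at once.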
...and finally, as PostgreSQL is an MVCC database and does not immediately delete all data, you should clean things up with either a VACUUM FULL or a CLUSTER.
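For example (the CLUSTER form needs an index to order by; the index name here is just a placeholder):

VACUUM FULL mytable;
-- or rewrite the table in index order instead:
CLUSTER mytable USING mytable_pkey;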