I have a table that looks like this:
CREATE TABLE IF NOT EXISTS list (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tok TEXT,
sid TEXT NOT NULL,
aid TEXT,
hash TEXT,
qtt SMALLINT,
p DECIMAL,
UNIQUE (tok, sid, aid),
UNIQUE (sid, qtt, hash)
);
I want to create a dynamic insert function with node-pg, the query of which looks roughly like this:
INSERT INTO list (tok, sid, aid, qtt, hash, p)
VALUES ($1, $2, $3, $4, $5, $6)
ON CONFLICT (tok, sid, aid, qtt, hash)
DO UPDATE SET p = $6;
and when I try to insert values of (NULL, string, NULL, int, string, decimal)
(values that match one of the composite keys) it throws this error:
error: there is no unique or exclusion constraint matching the ON CONFLICT specification.
It seems this has less to do with node-pg than with how I wrote the INSERT query itself. My intent is for a single upsert query to handle list items of both "UNIQUE types" in case of a conflict. Other questions about composite unique constraints don't give a firm answer for composite keys with overlapping columns.
Assuming every row matches exactly one of the unique keys, what do I do with the insert or the constraints to make this work properly? And is there a way to keep it a dynamic, one-query solution?
The DO UPDATE variant of INSERT ... ON CONFLICT ... works for a single "conflict target". The manual:
For ON CONFLICT DO NOTHING, it is optional to specify a conflict_target; when omitted, conflicts with all usable constraints (and unique indexes) are handled. For ON CONFLICT DO UPDATE, a conflict_target must be provided.
You cannot conflate columns from multiple UNIQUE constraints into one "conflict target":
ON CONFLICT (tok, sid, aid, qtt, hash)
That makes Postgres look for a single multicolumn constraint spanning all five columns, which does not exist, and results in the error message you reported.
You can "check" with ON CONFLICT ... DO NOTHING, which handles all constraints, including the two you mentioned. But that only suppresses the exception; it does not avoid possible race conditions for a later UPDATE. There is no clean "one-query solution".
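A minimal sketch of that variant: with the conflict target omitted, a violation of any unique constraint or index is swallowed, and nothing is updated.

```sql
-- No conflict target: a conflict with *any* usable unique constraint
-- or index is silently skipped; the existing row is left unchanged.
INSERT INTO list (tok, sid, aid, qtt, hash, p)
VALUES ($1, $2, $3, $4, $5, $6)
ON CONFLICT DO NOTHING;
```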
Your desired workflow is inherently flawed.
If only one of the two unique constraints is violated and you then update the value column but not the other columns, the resulting state is questionable: the row ends up with different (pre-existing) values for the remaining columns, which in turn can trigger a unique violation for different values on the next INSERT. The result of a bulk UPSERT suddenly depends on the sequence of input rows.
Also, two rows could raise a unique violation at once. Which one shall be updated? (UPSERT is built around the premise of updating one.) This is a hot mess.
The clean solution would be two separate tables with one PK each.
Or maybe two mutually exclusive partial unique indexes on the one table. All columns are nullable except sid, which is the intersection of both unique indexes; that points in this direction. Is exactly one of (tok, aid) and (qtt, hash) NOT NULL for every row?
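If so, two partial unique indexes could replace the constraints. A sketch, assuming rows always populate exactly one of the two column groups (the index names are made up):

```sql
-- Each index only covers rows of "its" shape, so the two never overlap.
CREATE UNIQUE INDEX list_tok_aid_uni ON list (tok, sid, aid)
WHERE qtt IS NULL AND hash IS NULL;

CREATE UNIQUE INDEX list_qtt_hash_uni ON list (sid, qtt, hash)
WHERE tok IS NULL AND aid IS NULL;
```

An upsert can then target one partial index by repeating its WHERE predicate after the conflict target, but you would still need two different queries, one per shape.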
With some luck you run Postgres 15 or later, and you can consolidate both unique constraints into one using the NULLS NOT DISTINCT
clause:
ALTER TABLE list
DROP CONSTRAINT list_sid_qtt_hash_key
, DROP CONSTRAINT list_tok_sid_aid_key
, ADD CONSTRAINT list_uni_key UNIQUE NULLS NOT DISTINCT (sid, qtt, hash, tok, aid);
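With that single constraint in place, one query covers both "UNIQUE types". NULLS NOT DISTINCT makes NULL values compare as equal for the constraint, so rows with NULL in tok/aid (or qtt/hash) still conflict as intended. A sketch, using EXCLUDED to reference the row that failed to insert:

```sql
INSERT INTO list (tok, sid, aid, qtt, hash, p)
VALUES ($1, $2, $3, $4, $5, $6)
ON CONFLICT (sid, qtt, hash, tok, aid)
DO UPDATE SET p = EXCLUDED.p;  -- EXCLUDED.p: the proposed new value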
Then all your problems go away.