I have a program that synchronizes records in two databases. It relies on a primary key violation on insert to know that a record already exists, and then applies an update to that record. That way, I avoid executing an explicit SQL query to check for the existence of a given record.
But for one particular table, I have both a primary key and a unique index (on different columns than the primary key). When I try to insert a duplicate record into it, the unique index violation occurs first, so the primary key violation never happens and the program's algorithm fails.
Is there a way I can ensure that the primary key violation always occurs first? Or is there an explicit rule that determines which check is made first?
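Roughly, the logic is equivalent to the following PSQL sketch (illustrative only, with placeholder values and the column list shortened to the columns shown in the DDL below; the actual program does the same thing from client code):

EXECUTE BLOCK AS
BEGIN
  INSERT INTO BONS (BON_ID, BON_OPENTIME, BON_CLOSETIME, BON_SALESMANID,
                    BON_POSID, BON_NUM /* , remaining columns */)
  VALUES (1, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, 5, 10, 100 /* , ... */);

  /* Primary key violation: the record already exists, so update it instead. */
  WHEN GDSCODE unique_key_violation DO
    UPDATE BONS
       SET BON_OPENTIME = CURRENT_TIMESTAMP /* , ... other columns */
     WHERE BON_ID = 1;

  /* When the unique index IDX_UNQ_BON_POSID_NO is violated instead, a
     different error is raised first, the handler above never fires, and
     this is where the algorithm breaks down. */
END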
Below is the DDL of the table:
CREATE TABLE BONS
(
  BON_ID         INTEGER NOT NULL,
  BON_OPENTIME   TIMESTAMP NOT NULL,
  BON_CLOSETIME  TIMESTAMP NOT NULL,
  BON_SALESMANID INTEGER NOT NULL,
  BON_POSID      INTEGER NOT NULL,
  BON_TABLENO    SMALLINT,
  BON_NUM        INTEGER NOT NULL,
  ...
);
/******************************************************************************/
/**** Primary key ****/
/******************************************************************************/
ALTER TABLE BONS ADD CONSTRAINT PK_BON_ID PRIMARY KEY (BON_ID);
/******************************************************************************/
/**** Indices ****/
/******************************************************************************/
CREATE UNIQUE INDEX IDX_UNQ_BON_POSID_NO ON BONS (BON_POSID, BON_NUM);
With some experimenting, it seems that the order of creation decides which is fired first, the primary key or the unique index (or, more specifically, the value of its RDB$INDICES.RDB$INDEX_ID). This suggests that the unique index was created before the primary key, or at least received a lower RDB$INDEX_ID, which can also happen if another index existed on the table before the primary key and was dropped before the unique index was added.
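You can check the assigned ids with a query against the system table:

/* Lists all indexes on BONS with their per-table id; in my testing, the
   index with the lowest RDB$INDEX_ID is the one checked first. */
SELECT RDB$INDEX_NAME, RDB$INDEX_ID, RDB$UNIQUE_FLAG
FROM RDB$INDICES
WHERE RDB$RELATION_NAME = 'BONS'
ORDER BY RDB$INDEX_ID;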
Just dropping and recreating the unique index will not be sufficient. Index ids are assigned per table, and if you drop one or more indexes, newly created indexes will reuse the freed ids. You will need to drop both the unique index and the primary key, and then recreate them in the desired order.
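For example (an isql sketch; the drops are grouped into one transaction and the creates into another, for the reason discussed at the end of this answer):

/* In isql, disable automatic commit of DDL so the statements group into
   the two transactions shown. */
SET AUTODDL OFF;

/* Transaction 1: drop the primary key constraint (and its backing index)
   and the unique index. */
ALTER TABLE BONS DROP CONSTRAINT PK_BON_ID;
DROP INDEX IDX_UNQ_BON_POSID_NO;
COMMIT;

/* Transaction 2: recreate them, primary key first so it receives the
   lower RDB$INDEX_ID and its violation is reported first. */
ALTER TABLE BONS ADD CONSTRAINT PK_BON_ID PRIMARY KEY (BON_ID);
CREATE UNIQUE INDEX IDX_UNQ_BON_POSID_NO ON BONS (BON_POSID, BON_NUM);
COMMIT;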
However, some more testing shows this is not guaranteed to be deterministic across a backup and restore. On backup, indexes are stored in the backup file in their physical storage order in the RDB$INDICES metadata table, and on restore they are restored in that order: the first index of a table gets RDB$INDEX_ID = 1, the second 2, and so on, which means their evaluation order can change.
Dropping both indexes in one transaction and recreating them in another single transaction makes it likely that their physical storage order is the same as their logical order, but this is not guaranteed (e.g. the record for the first index might not fit in the free space of an earlier data page, while the record for the second might).
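After recreating the indexes (and again after any backup/restore cycle, given the caveat above), you can verify which check is evaluated first by attempting an insert that duplicates an existing row on both keys (hypothetical values; the column list is shortened to match the DDL in the question):

/* Assuming a row with BON_ID = 1, BON_POSID = 10 and BON_NUM = 100 already
   exists, this insert violates both the primary key and the unique index;
   the error reported tells you which one was evaluated first. */
INSERT INTO BONS (BON_ID, BON_OPENTIME, BON_CLOSETIME, BON_SALESMANID,
                  BON_POSID, BON_NUM /* , remaining NOT NULL columns */)
VALUES (1, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, 5, 10, 100 /* , ... */);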