We're trying to stream data to PostgreSQL 11 using the following query:
INSERT INTO identifier_to_item
values (:id, :identifier_value, :identifier_type, :identifier_manufacturer, :delivery_timestamp_utc, :item)
ON CONFLICT (identifier_value, manufacturer, type) DO UPDATE
SET item = :item, delivery_timestamp_utc = :delivery_timestamp_utc
WHERE identifier_to_item.delivery_timestamp_utc < :delivery_timestamp_utc
Basically "insert record in the table, if it already exists -> optionally override some fields based on the data already stored in the database".
We would like to hook this query up to a message queue and run it in a highly concurrent environment across several instances, so it is possible that the same row will be hit from different connections by this query. For us it is critical that only the item with the highest delivery timestamp eventually ends up in the table.
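As a quick sanity check of the timestamp guard, here is a self-contained sketch against the table assumed above, with made-up literal values in place of the :named parameters and EXCLUDED.* in place of repeating them (which is equivalent here): a newer message overwrites the row, while an older, replayed message leaves it untouched.

-- Newer delivery: inserts the row, or overwrites it if an older version is already stored.
INSERT INTO identifier_to_item
VALUES (1, 'SN-123', 'serial', 'acme', TIMESTAMP '2020-01-02 10:00', '{"v": 2}')
ON CONFLICT (identifier_value, manufacturer, type) DO UPDATE
SET item = EXCLUDED.item, delivery_timestamp_utc = EXCLUDED.delivery_timestamp_utc
WHERE identifier_to_item.delivery_timestamp_utc < EXCLUDED.delivery_timestamp_utc;

-- Replaying an older message afterwards changes nothing, because the
-- WHERE condition of DO UPDATE is false for the conflicting row:
INSERT INTO identifier_to_item
VALUES (1, 'SN-123', 'serial', 'acme', TIMESTAMP '2020-01-01 10:00', '{"v": 1}')
ON CONFLICT (identifier_value, manufacturer, type) DO UPDATE
SET item = EXCLUDED.item, delivery_timestamp_utc = EXCLUDED.delivery_timestamp_utc
WHERE identifier_to_item.delivery_timestamp_utc < EXCLUDED.delivery_timestamp_utc;
-- Reports INSERT 0 0: no row inserted and no row updated.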
According to the documentation, ON CONFLICT DO UPDATE guarantees an atomic INSERT or UPDATE outcome even under high concurrency, but is accessing the fields in the UPDATE ... WHERE part also atomic and thread safe? Is this statement using some kind of pessimistic row/table locking?
PostgreSQL does not use threads on the server side.
PostgreSQL does not impose pessimistic or optimistic row-level locking on you: it is left to the application to decide whether to implement pessimistic or optimistic locking (a pessimistic variant is sketched below).
PostgreSQL does not escalate row-level locks to table locks.
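For completeness, if the application did want explicit pessimistic locking rather than relying on a single INSERT ... ON CONFLICT statement, the usual PostgreSQL tool is SELECT ... FOR UPDATE inside a transaction. A rough, purely illustrative sketch against the assumed table (the literal key values are placeholders):

BEGIN;
-- Lock the matching row if it exists; concurrent writers block on it until COMMIT/ROLLBACK.
SELECT delivery_timestamp_utc
FROM identifier_to_item
WHERE identifier_value = 'SN-123' AND manufacturer = 'acme' AND type = 'serial'
FOR UPDATE;
-- The application then compares the stored timestamp with the incoming one
-- and issues an UPDATE (or an INSERT if no row was returned).
COMMIT;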