Whenever I use a small statement, for example:
DELETE FROM c_ordertax WHERE (c_order_id,c_tax_id) IN ((183691598,1000862),(183691198,1000862));
It executes perfectly... but if I execute a lengthy statement to delete, say, 18755 records with these scalar values, it fails with a "max_stack_depth" exceeded error. This option in postgresql.conf has been set to 2MB, and the query that threw the error doesn't even amount to 2MB; it's just 300KB.
Note: no triggers are attached to the table.
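For reference, this is how I've been checking the setting (the 4MB value below is just an illustration; changing it per session normally requires superuser privileges and is capped by the OS stack limit, ulimit -s):

SHOW max_stack_depth;          -- reports the current limit (2MB in my case)
SET max_stack_depth = '4MB';   -- per-session override; superuser only, and capped
                               -- by the OS stack limit minus a safety margin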
And one thing I noticed about other queries: when I use a single value in the IN clause, e.g. DELETE FROM c_ordertax WHERE (c_order_id) IN ((183691598),(183691198));
they don't have any issues, and however lengthy the query may be, it executes perfectly...
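My guess (and it is only a guess) is that a single-column IN list gets collapsed into one array comparison, something like the equivalent form below, which avoids building a deep expression tree:

-- Hypothetical equivalent of the single-column IN list above
DELETE FROM c_ordertax
WHERE c_order_id = ANY (ARRAY[183691598, 183691198]);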
So my question is: what is the maximum number of scalar values that can be used in an IN clause? And if the number of fields in each scalar value increases, is there a formula to determine the maximum number of scalar values that can be used? E.g.:
5 values with 2 fields => ((1,2),(1,2),(1,2),(1,2),(1,2))
2 values with 3 fields => ((1,2,3),(1,2,3))
Has any database mastermind encountered these kinds of issues? If so, how do I tackle them?
It should work if you rewrite the list of scalar values into a VALUES list:
DELETE FROM c_ordertax
USING (
    VALUES
        (183691598, 1000862),
        (183691198, 1000862)
) AS t(ord_id, tax_id)
WHERE c_order_id = t.ord_id
  AND c_tax_id = t.tax_id;
I tried this with 10000 pairs in the VALUES list and it did not throw an error. That was with Postgres 11, however; I don't have 9.3 available right now.
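If the pairs come from an application, another possibility (just a sketch; note that multi-argument unnest was only added in PostgreSQL 9.4, so it won't help on 9.3) is to bind the keys as two parallel arrays and unnest them:

-- Sketch: the array literals stand in for parameters bound by the
-- application (e.g. $1 and $2); requires PostgreSQL 9.4+
DELETE FROM c_ordertax c
USING unnest(
          ARRAY[183691598, 183691198],
          ARRAY[1000862, 1000862]
      ) AS t(ord_id, tax_id)
WHERE c.c_order_id = t.ord_id
  AND c.c_tax_id = t.tax_id;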