I have a reference columnstore table in my SingleStore cluster that has roughly 2 million rows and runs on 7 leaf nodes. Every day we get around 10k updates.
Now, updates are not great on SingleStore columnstore tables, so I came up with 2 approaches:
Since this is a reference table, deletes and updates will run on all the nodes, which may block my partitions. Is there a safe threshold or number I can rely on? For example, can I assume that 2k updates every hour won't cause any issues, or that 10k deletes every hour won't break the system or block my partitions?
Based on the scenario you describe and your concern about updates causing issues or blockages, I'd recommend tuning the default_columnstore_table_lock_threshold engine variable. By default, this variable is set to 0, which means a value of 5000 is used. With that default, partition-level locking only kicks in when an UPDATE or DELETE affects more than 5000 rows in a partition.
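If you prefer to inspect or change the cluster-wide default rather than hint individual queries, the standard MySQL-style variable commands should work in SingleStore (a sketch; the exact value shown is illustrative, so verify the behavior against your version):

SHOW VARIABLES LIKE 'default_columnstore_table_lock_threshold';
-- Raise the cluster-wide default so partition-level locks only engage
-- above 10000 affected rows per partition (illustrative value only).
SET GLOBAL default_columnstore_table_lock_threshold = 10000;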
However, you can override this threshold per statement by using the OPTION (columnstore_table_lock_threshold = <value>) hint in your UPDATE and DELETE queries. For example:
UPDATE your_table SET column_name = 'new_value'
WHERE some_condition OPTION (columnstore_table_lock_threshold = 2000);
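The same hint applies to DELETE statements; here is a sketch that mirrors the UPDATE example above, using the same placeholder table and condition names:

DELETE FROM your_table
WHERE some_condition OPTION (columnstore_table_lock_threshold = 2000);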
As for a safe threshold, it really depends on your specific use case and hardware specs. In your case, I would start with the default configuration and monitor system performance, then adjust the threshold based on what you observe and your system requirements.
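To see whether these updates are actually blocking other workload, a quick check could look like the sketch below. This assumes your SingleStore version exposes the mv_blocked_queries view in information_schema; if it does not, SHOW PROCESSLIST is a rougher fallback.

-- List queries currently blocked by other workload, if any.
-- mv_blocked_queries is assumed to be available in your release.
SELECT * FROM information_schema.mv_blocked_queries;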