I have around 200 tables in an SQLite database, with hundreds to millions of rows each. These tables are queried by many concurrent processes of an OLTP application.
Each table occasionally needs a full refresh -- delete all rows and then insert different ones, all within a transaction. For the largest tables this takes almost a minute, but such a refresh happens only a few times a day per table.
I need to make sure that readers do not have to wait for the refresh to finish -- they should see the old version of the table data until the transaction completes, then the new version. Any waits, if unavoidable, should be on the order of milliseconds rather than seconds or minutes.
Can this be achieved -- that is, can I avoid database or table locks blocking the readers? I am not concerned about writers; they can wait and serialize.
It is a Python application with pysqlite.
Use WAL mode:
WAL provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently.
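Here is a minimal sketch of this behavior using Python's sqlite3 module (the stdlib module that pysqlite became). The table name `items` and the database path are made up for illustration; the point is that a reader connection still sees the old snapshot while the refresh transaction is in progress, and sees the new rows immediately after the commit. Note that WAL mode requires a file-backed database, not `:memory:`.

```python
import os
import sqlite3
import tempfile

# WAL mode needs a real file; create one in a temporary directory.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")

# Writer connection. isolation_level=None puts the connection in
# autocommit mode so we can control transactions explicitly.
w = sqlite3.connect(db_path, isolation_level=None)
w.execute("PRAGMA journal_mode=WAL")  # persistent setting for this database file
w.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")
w.executemany("INSERT INTO items (val) VALUES (?)", [("old",)] * 3)

# Reader connection opened separately, as a concurrent process would do.
r = sqlite3.connect(db_path, isolation_level=None)

# Full refresh in one transaction. BEGIN IMMEDIATE takes the write
# lock up front, so competing writers serialize here -- readers do not.
w.execute("BEGIN IMMEDIATE")
w.execute("DELETE FROM items")
w.executemany("INSERT INTO items (val) VALUES (?)", [("new",)] * 5)

# While the refresh is uncommitted, the reader sees the old snapshot
# without blocking.
old_view = r.execute("SELECT COUNT(*), MIN(val) FROM items").fetchone()
print(old_view)  # (3, 'old')

w.execute("COMMIT")

# A read started after the commit sees the new version.
new_view = r.execute("SELECT COUNT(*), MIN(val) FROM items").fetchone()
print(new_view)  # (5, 'new')
```

For your one-minute refreshes, the only blocking to watch for is between writers (BEGIN IMMEDIATE serializes them) and at checkpoint time; readers keep reading the old snapshot from the WAL throughout.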