We have a slow view that joins multiple tables. The view is queried by Power BI, and every interaction in Power BI is slow.
So I copied all the data from the view into a new table, and it is much faster. However, I am not sure how I should maintain this new table dynamically. Maybe I just need a scheduled stored procedure that truncates and repopulates it? Is there a better approach?
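For reference, this is roughly what I had in mind (dbo.SlowView and dbo.ReportingTable are placeholder names):

```sql
-- Sketch of the simple scheduled refresh I was considering.
-- dbo.SlowView and dbo.ReportingTable are placeholder names.
CREATE PROCEDURE dbo.RefreshReportingTable
AS
BEGIN
    SET NOCOUNT ON;

    -- Throw away the old copy and rebuild it from the slow view.
    TRUNCATE TABLE dbo.ReportingTable;

    INSERT INTO dbo.ReportingTable
    SELECT *
    FROM dbo.SlowView;
END;
```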
I came across the idea of a materialized view, but we cannot use one because our tables are updated and deleted from from time to time. (A materialized view is disabled when an UPDATE or DELETE occurs in the referenced base tables. This restriction doesn't apply to INSERTs. To re-enable the materialized view, run ALTER MATERIALIZED VIEW with REBUILD.)
Creating a table of pre-computed data to replace a slow view is an old-school but perfectly acceptable approach.
For ease of communication, let's call the table you created a 'reporting table'.
One big advantage is simplicity: it doesn't depend on any other database features. One disadvantage is that the data may be somewhat out of date (depending on how frequently you refresh the reporting table).
In terms of the table itself, first consider its structure, particularly the clustered index. Ideally, the clustered index should be on the column(s) most commonly used to filter the data when people use the report.
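For example, if the report usually filters on a date column, the reporting table might look something like this (all table and column names here are illustrative assumptions, not from your schema):

```sql
-- Illustrative reporting table; column names are assumptions.
CREATE TABLE dbo.ReportingTable
(
    ReportDate  date          NOT NULL,
    CustomerID  int           NOT NULL,
    OrderCount  int           NOT NULL,
    TotalAmount decimal(18,2) NOT NULL
);

-- Cluster on the column most commonly used to filter in the report,
-- so Power BI's filtered reads become efficient range scans.
CREATE CLUSTERED INDEX CX_ReportingTable_ReportDate
    ON dbo.ReportingTable (ReportDate);
```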
Once the structure is in place, create a stored procedure (as you suggested), but instead of truncating and re-inserting everything, I suggest pre-calculating the changes and modifying only the rows that matter.
For example, the stored procedure could do the following:

1. Run the view's query once into a temporary table holding the fresh result set.
2. Compare that result set with the reporting table, and put the rows that are new or changed into a #temp table (rows to add).
3. Put the rows that exist in the reporting table but not in the fresh result set into a #tempdelete table (rows to delete).
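A sketch of that pre-calculation (dbo.SlowView and dbo.ReportingTable are placeholder names; EXCEPT does the row-by-row comparison and, conveniently, treats NULLs as equal):

```sql
-- Run the slow query once and keep the fresh result set.
SELECT *
INTO #fresh
FROM dbo.SlowView;

-- Rows in the fresh result set that aren't already in the
-- reporting table = rows to insert (new rows, plus the new
-- version of any changed row).
SELECT *
INTO #temp
FROM
(
    SELECT * FROM #fresh
    EXCEPT
    SELECT * FROM dbo.ReportingTable
) AS i;

-- Rows in the reporting table that are no longer in the fresh
-- result set = rows to delete (deleted rows, plus the old
-- version of any changed row).
SELECT *
INTO #tempdelete
FROM
(
    SELECT * FROM dbo.ReportingTable
    EXCEPT
    SELECT * FROM #fresh
) AS d;
```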
At this point, no updates have been made to the reporting table (so nobody is locked out or reading half-updated data). The #temp table holds the rows to add, and #tempdelete the rows to delete. Note that a row which would have been updated is handled as a delete of the old version followed by an insert of the new one.
Now, within a transaction, delete the rows listed in #tempdelete from the reporting table and insert the rows from #temp. This should be very quick, since it only touches changed, deleted, or new records. The transaction ensures nobody sees a state where the deletions have been applied but the new rows have not yet been added.
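That final step might look like this (OrderKey stands in for whatever key column(s) uniquely identify a row in your reporting table):

```sql
BEGIN TRANSACTION;

-- Remove deleted rows and the old versions of changed rows.
-- OrderKey is a placeholder for your actual key column(s).
DELETE r
FROM dbo.ReportingTable AS r
INNER JOIN #tempdelete AS d
    ON r.OrderKey = d.OrderKey;

-- Add new rows and the new versions of changed rows.
INSERT INTO dbo.ReportingTable
SELECT *
FROM #temp;

COMMIT TRANSACTION;
```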
If this is likely to run very frequently (and/or multiple instances could run concurrently), you will need to add some sort of thread safety so that two refreshes don't interleave. If only one instance ever runs at a time, that isn't necessary.
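One common way to get that thread safety in SQL Server is an application lock via sp_getapplock (the lock name 'RefreshReportingTable' below is arbitrary):

```sql
BEGIN TRANSACTION;

-- Take an exclusive app lock so only one refresh runs at a time.
DECLARE @lockResult int;
EXEC @lockResult = sp_getapplock
    @Resource    = 'RefreshReportingTable',  -- arbitrary lock name
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 0;                        -- fail fast if one is running

IF @lockResult < 0
BEGIN
    -- Another refresh is already in progress; bail out.
    ROLLBACK TRANSACTION;
    RETURN;
END;

-- ... the delete-from-#tempdelete / insert-from-#temp steps go here ...

COMMIT TRANSACTION;  -- the lock is released with the transaction
```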