I am using Microsoft Sync Framework 2.1 against an Azure SQL database. The table I am provisioning for sync has approximately 1M records. When the sync scope is created, Sync Framework creates an XXX_tracking table with one row per original record. Populating this table is very slow on Azure. The query being executed looks like this:
INSERT INTO [Transactions_tracking]
    ([Id], [create_scope_local_id], [local_create_peer_key], [local_create_peer_timestamp],
     [update_scope_local_id], [local_update_peer_key], [sync_row_is_tombstone],
     [PointOfSaleId], [ExecutedTime])
SELECT [base].[Id], NULL, 0, @@DBTS+1, NULL, 0, 0, [base].[PointOfSaleId], [base].[ExecutedTime]
FROM [Transactions] [base]
LEFT OUTER JOIN [Transactions_tracking] [side] ON [base].[Id] = [side].[Id]
WHERE [side].[Id] IS NULL;
On SQL Express this takes 19s, while on Azure with 50 DTUs it takes 619s, which I really cannot explain.
Any ideas? Thanks, Travis
That bulk-load process is probably hitting the DTU limits of your tier, causing throttling. Premium tiers are intended for I/O-intensive workloads; you can scale up to a Premium tier before running this type of workload and scale back down to the original tier once it completes.
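You can check whether throttling is actually the cause by watching the resource stats DMV while the provisioning runs, and the tier change itself can be done in plain T-SQL. A sketch, where MyDb and the S2/P2 service objectives are placeholders for your actual database and tiers:

-- Recent DTU consumption (one row per ~15 seconds, retained about an hour).
-- Values pinned near 100 indicate the workload is being throttled.
SELECT end_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

-- Scale up before the provisioning run (typically executed from the master
-- database; 'MyDb' and 'P2' are placeholders). The operation is asynchronous.
ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'P2');

-- ...run the Sync Framework provisioning...

-- Scale back down to the original tier afterwards.
ALTER DATABASE [MyDb] MODIFY (SERVICE_OBJECTIVE = 'S2');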
As an example of how Premium tiers provide better performance for I/O operations and transactions, compare a Standard S1 with a Premium P2: the Standard S1 has an insert limit of about 1.4 MB per minute, while a Premium P2 has an insert limit of about 13.5 MB per minute.
You can sometimes throttle your own process by using WAITFOR DELAY statements, but that only helps if the problem is that your inserts come in bursts that temporarily exceed the DTU limits. If the workload exceeds the DTU limits for hours at a time, you simply need more DTUs.
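As a sketch of that self-throttling idea applied to your query, the tracking-table population could be run in batches with a pause between each one. The batch size and delay here are illustrative values you would need to tune against your DTU ceiling:

-- Insert tracking rows in batches, pausing between batches so the
-- workload stays under the DTU ceiling instead of bursting past it.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO [Transactions_tracking]
        ([Id], [create_scope_local_id], [local_create_peer_key],
         [local_create_peer_timestamp], [update_scope_local_id],
         [local_update_peer_key], [sync_row_is_tombstone],
         [PointOfSaleId], [ExecutedTime])
    SELECT TOP (50000)   -- illustrative batch size
        [base].[Id], NULL, 0, @@DBTS + 1, NULL, 0, 0,
        [base].[PointOfSaleId], [base].[ExecutedTime]
    FROM [Transactions] [base]
    LEFT OUTER JOIN [Transactions_tracking] [side]
        ON [base].[Id] = [side].[Id]
    WHERE [side].[Id] IS NULL;

    SET @rows = @@ROWCOUNT;

    WAITFOR DELAY '00:00:05';  -- illustrative pause between batches
END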
The Azure documentation here also discusses the importance of batching to improve insert performance.
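One way to apply that advice, as an alternative to the TOP-based loop above: walk the primary key in fixed ranges, so each batch seeks one slice of the clustered index instead of re-evaluating the anti-join over all 1M rows. This sketch assumes [Id] is an increasing integer key; a GUID key would need a different slicing strategy:

-- Batch the tracking-table population by key range.
-- Assumes [Id] is an increasing integer key (illustrative only).
DECLARE @from INT = 0;
DECLARE @step INT = 50000;   -- rows per batch; tune for your tier
DECLARE @max  INT;

SELECT @max = MAX([Id]) FROM [Transactions];

WHILE @from <= @max
BEGIN
    INSERT INTO [Transactions_tracking]
        ([Id], [create_scope_local_id], [local_create_peer_key],
         [local_create_peer_timestamp], [update_scope_local_id],
         [local_update_peer_key], [sync_row_is_tombstone],
         [PointOfSaleId], [ExecutedTime])
    SELECT [base].[Id], NULL, 0, @@DBTS + 1, NULL, 0, 0,
           [base].[PointOfSaleId], [base].[ExecutedTime]
    FROM [Transactions] [base]
    LEFT OUTER JOIN [Transactions_tracking] [side]
        ON [base].[Id] = [side].[Id]
    WHERE [base].[Id] > @from AND [base].[Id] <= @from + @step
      AND [side].[Id] IS NULL;

    SET @from = @from + @step;
END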