I have an external service notification hitting an endpoint whenever there is a change in my database. The endpoint pulls the updated data from the database and fires it out to connected clients via SignalR.
This service can send multiple calls within the span of a few seconds, but I'd like to only respond to the final one. I'd like to throttle (debounce may be the more accurate term) the calls to the database and the notifications to clients. The important part here is that I'd like to respond to the FINAL request within a timeframe, as I don't want clients to miss any changes.
Here's an example of the problem I'd like to avoid. This is a log of hits to the API ending in a deadlock error (I can handle that error, but I'd rather avoid the situation altogether).
RefreshOrderNotifications received at Tue Oct 03 2023 09:34:19 GMT-0400 (Eastern Daylight Time)
RefreshOrderNotifications received at Tue Oct 03 2023 09:34:57 GMT-0400 (Eastern Daylight Time)
RefreshOrderNotifications received at Tue Oct 03 2023 09:34:57 GMT-0400 (Eastern Daylight Time)
RefreshOrderNotifications received at Tue Oct 03 2023 09:34:57 GMT-0400 (Eastern Daylight Time)
Server - Error: Notification Server connection to DB failed. Transaction (Process ID 68) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I'm having a hard time conceptualizing how to approach this problem. Since each request runs on a different thread, I'm not quite sure how to communicate across those threads. Would a static property even be useful or wise in this case? I'm not sure whether it's shared across threads.
I can stand to have a 1-3 second delay in the response to the API call, if that's helpful.
Adding clarification based on comments (All good questions, thank you)
Second edit: Based on the really helpful response from @qkhanhpro, I put this together. It has some holes because the actions aren't atomic (I believe that's the right term), so if there is a quick succession of calls at exactly the same time, some will slip by. This is as good as I need for now. I'll clean it up and do some further testing.
public class ThrottleManager
{
    // NOTE: List<string> is not thread-safe, and the Contains/Add pair below
    // is not atomic, so simultaneous calls can slip past each other
    // (the "holes" mentioned above).
    private List<string> _queue = new List<string>();

    public async Task<ThrottleResult> QueueThenRun(string runKey, int delayMilliseconds, Func<Task<ThrottleResult>> funcToRun)
    {
        var result = new ThrottleResult();
        if (_queue.Contains(runKey) || funcToRun == null)
        {
            // A run for this key is already pending, so just report success.
            return result;
        }

        _queue.Add(runKey);
        await Task.Delay(delayMilliseconds);
        _queue.Remove(runKey);

        var returnMe = await funcToRun();
        return returnMe;
    }
}

public class ThrottleResult
{
    public bool Success { get; set; } = true;
    public string Message { get; set; } = string.Empty;
    public Exception? Exception { get; set; } = null;
}
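For reference, here is a hypothetical sketch of one way to close the race described above: replacing the List<string> with a ConcurrentDictionary, whose TryAdd is atomic, so two simultaneous calls for the same key cannot both win. The class and method names mirror the code above; the return type is simplified to a plain Task for brevity, which is an assumption, not the original signature.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class ThrottleManager
{
    // TryAdd is atomic: of two concurrent callers with the same key,
    // exactly one wins and becomes responsible for the delayed run.
    private readonly ConcurrentDictionary<string, byte> _queue = new ConcurrentDictionary<string, byte>();

    public async Task QueueThenRun(string runKey, int delayMilliseconds, Func<Task> funcToRun)
    {
        if (funcToRun == null || !_queue.TryAdd(runKey, 0))
        {
            // Another caller already owns this key; its pending run covers us.
            return;
        }

        await Task.Delay(delayMilliseconds);
        _queue.TryRemove(runKey, out _);
        await funcToRun();
    }
}
```

A call arriving after TryRemove but before funcToRun completes will start a fresh cycle, which is usually what you want here: any change that lands mid-run still triggers a follow-up refresh.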
You can create a singleton class that holds a single ConcurrentDictionary. The "Key" would be the criterion you would like to "group" by, such as the OrderId or UserId; the "Value" should be a Task.
Whenever a request/thread hits you
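The answer above is cut off, but the described shape (a singleton holding a ConcurrentDictionary keyed by a grouping criterion) is a common trailing-edge debounce pattern. Below is a hypothetical sketch of one way it could look; KeyedDebouncer and its members are illustrative names, not from the original answer. Each new call for a key cancels the pending delayed task and schedules a fresh one, so only the final call in a burst does the work.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public class KeyedDebouncer
{
    // One pending cancellation source per key (e.g. OrderId or UserId).
    private readonly ConcurrentDictionary<string, CancellationTokenSource> _pending =
        new ConcurrentDictionary<string, CancellationTokenSource>();

    public void Debounce(string key, TimeSpan delay, Func<Task> action)
    {
        var cts = new CancellationTokenSource();
        // Atomically replace any pending run for this key, cancelling the old one.
        _pending.AddOrUpdate(key, cts, (_, existing) => { existing.Cancel(); return cts; });

        _ = Task.Run(async () =>
        {
            try
            {
                // Cancelled (and the action skipped) if a newer call arrives first.
                await Task.Delay(delay, cts.Token);
                _pending.TryRemove(key, out _);
                await action();
            }
            catch (TaskCanceledException)
            {
                // Superseded by a later call for the same key; do nothing.
            }
        });
    }
}
```

Registered as a singleton, the endpoint would call something like `debouncer.Debounce(orderId, TimeSpan.FromSeconds(2), RefreshAndNotify)` on every hit, and only the last hit in each burst would query the database and push to SignalR. Note there is a small residual race between Task.Delay completing and TryRemove running; for this use case a stray extra refresh is harmless.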