In an ASP.NET Core controller, we need to fetch some data from the database. Using EF Core we have two options: `ToList()` and `ToListAsync()`. This is my understanding of the difference between the two, and I wanted to know if I'm right:
With `ToList()` the thread has to wait until the data has been fetched from the DB. Depending on where our database server is, this may be very quick (on the same machine) or very slow (on the other side of the planet). But if we use `ToListAsync()`, the thread is freed and has the chance to pick up another HTTP request, so while our data is being fetched for request A, the freed-up thread may finish request B.

So, my question is: if we know the fetching is very fast (same machine or same datacenter) and we know there are not many users sending requests (example: an admin panel website with 15-20 users), is it actually better to use the synchronous version and avoid context switching?
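For reference, a minimal sketch of the two variants I'm comparing (the controller, `AppDbContext`, and `User` are illustrative names, not from any real project):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[ApiController]
[Route("api/[controller]")]
public class UsersController : ControllerBase
{
    private readonly AppDbContext _db;
    public UsersController(AppDbContext db) => _db = db;

    // Synchronous: the request thread blocks until the query completes.
    [HttpGet("sync")]
    public ActionResult<List<User>> GetSync() =>
        _db.Users.ToList();

    // Asynchronous: the thread is returned to the pool while the
    // database round-trip is in flight.
    [HttpGet("async")]
    public async Task<ActionResult<List<User>>> GetAsync() =>
        await _db.Users.ToListAsync();
}
```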
In a nutshell, your understanding can be argued to be correct. As stated in the worth-reading article Asynchronous Programming in .NET - Introduction, Misconceptions, and Problems:
> **Misconception #2: Async / Await Makes Your Code Run “Faster”**
>
> Using `async`/`await` is not about making code “run faster”, it’s going to run slower than a similar synchronous method (and utilize more memory). Instead, efficiency (or in the case of UI apps, offloading) is what is gained by using `async`/`await`. I imagine this misconception occurs when running things in parallel becomes conflated with code running faster in general.
And from Asynchronous Programming - Async Performance: Understanding the Costs of Async and Await by Stephen Toub:
> Asynchronous methods are a powerful productivity tool, enabling you to more easily write scalable and responsive libraries and applications. It’s important to keep in mind, though, that asynchronicity is not a performance optimization for an individual operation. Taking a synchronous operation and making it asynchronous will invariably degrade the performance of that one operation, as it still needs to accomplish everything that the synchronous operation did, but now with additional constraints and considerations. A reason you care about asynchronicity, then, is performance in the aggregate: how your overall system performs when you write everything asynchronously, such that you can overlap I/O and achieve better system utilization by consuming valuable resources only when they’re actually needed for execution.
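To illustrate the "overlap I/O" point from the quote above, here is a hedged sketch of two independent I/O calls started together so their latencies overlap rather than add up. (Note this uses `HttpClient` for illustration: a single EF Core `DbContext` does not support concurrent operations, so with EF Core you would need separate context instances to overlap queries.)

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class OverlapExample
{
    // Illustrative URLs; substitute your own independent resources.
    public static async Task<(string A, string B)> FetchBothAsync(HttpClient http)
    {
        Task<string> a = http.GetStringAsync("https://example.com/a"); // starts immediately
        Task<string> b = http.GetStringAsync("https://example.com/b"); // overlaps with `a`
        await Task.WhenAll(a, b); // total wall time ≈ the slower of the two, not their sum
        return (a.Result, b.Result);
    }
}
```

Each individual awaited call is no faster than its synchronous counterpart; the gain is in aggregate throughput and in freeing threads while the I/O is in flight.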
As for your questions:
> until the data has been fetched from DB
> So, my question is if we know the fetching is very fast (same machine or same datacenter)
There are several potential caveats here:
> and we know there are not many users sending requests (example: an admin panel website with 15-20 users) is it actually better to use the synchronous version and avoid context switching?
Arguably, if you have low RPS then in general you should not care; the impact should be barely noticeable. Of course, as with all performance-related questions, you need to thoroughly test your actual setup (actual hardware, software, and usage/load patterns), but in this concrete case it sounds like it is not worth the time spent (unless you are very, very interested in the task and want to try some tools/approaches/etc. for science).
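If you do want to measure it, a quick load-test comparison of the two endpoints is enough to see whether the difference matters at your load. A sketch using `wrk` (the URLs and connection counts are placeholders; pick values matching your expected traffic):

```shell
# Compare sync vs async endpoints under modest concurrency.
# -t = worker threads, -c = open connections, -d = test duration.
wrk -t4 -c20 -d30s http://localhost:5000/api/users/sync
wrk -t4 -c20 -d30s http://localhost:5000/api/users/async
```

At 15-20 concurrent users you will most likely see near-identical latency and throughput, which is exactly the "should not care" case above; the async version only starts to pull ahead as concurrency grows and thread-pool threads become the scarce resource.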