asp.net-core, .net-core, async-await, concurrency, .net-8.0

Will awaiting a Task make an HTTP request slower?


In an ASP.NET Core controller, we need to fetch some data from the database. Using EF Core we have two options: ToList() and ToListAsync(). This is my understanding of the difference between the two, and I wanted to know if I'm right:

So, my question is: if we know the fetching is very fast (same machine or same datacenter) and we know there are not many users sending requests (example: an admin panel website with 15-20 users), is it actually better to use the synchronous version and avoid context switching?
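For context, here is a minimal sketch of the two options in a controller; the names (`ProductsController`, `AppDbContext`, `Products`) are illustrative placeholders, not from the original question:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// Hypothetical controller showing the two call styles being compared.
[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    private readonly AppDbContext _db; // assumed EF Core DbContext

    public ProductsController(AppDbContext db) => _db = db;

    // Synchronous: the request thread blocks until the query returns.
    [HttpGet("sync")]
    public IActionResult GetSync() => Ok(_db.Products.ToList());

    // Asynchronous: the thread is returned to the pool while the
    // database I/O is in flight, then the action resumes on completion.
    [HttpGet("async")]
    public async Task<IActionResult> GetAsync() =>
        Ok(await _db.Products.ToListAsync());
}
```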


Solution

  • In a nutshell, your understanding can be argued to be correct. As stated in the worth-reading article Asynchronous Programming in .NET - Introduction, Misconceptions, and Problems:

    Misconception #2: Async / Await Makes Your Code Run “Faster”

    Using async / await is not about making code “run faster”, it’s going to run slower than a similar synchronous method (and utilize more memory). Instead, efficiency (or in the case of UI apps, offloading) is what is gained by using async / await. I imagine this misconception occurs when running things in parallel becomes conflated with code running faster in general.

    And from the article Asynchronous Programming - Async Performance: Understanding the Costs of Async and Await by Stephen Toub:

    Asynchronous methods are a powerful productivity tool, enabling you to more easily write scalable and responsive libraries and applications. It’s important to keep in mind, though, that asynchronicity is not a performance optimization for an individual operation. Taking a synchronous operation and making it asynchronous will invariably degrade the performance of that one operation, as it still needs to accomplish everything that the synchronous operation did, but now with additional constraints and considerations. A reason you care about asynchronicity, then, is performance in the aggregate: how your overall system performs when you write everything asynchronously, such that you can overlap I/O and achieve better system utilization by consuming valuable resources only when they’re actually needed for execution.
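    The "slower per operation, better in the aggregate" point can be seen in a toy sketch (not from the quoted articles; `Thread.Sleep` stands in for blocking I/O and `Task.Delay` for asynchronous I/O):

    ```csharp
    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    static class AggregateDemo
    {
        // One operation: the async version does everything the sync one
        // does, plus allocating and driving an async state machine, so
        // in isolation it is marginally slower.
        static void BlockingCall() => Thread.Sleep(100);        // pins its thread
        static async Task AsyncCall() => await Task.Delay(100); // releases its thread

        // In aggregate: 100 overlapped async calls complete in roughly
        // the time of one, without dedicating 100 threads to waiting.
        static async Task RunManyAsync()
        {
            var tasks = Enumerable.Range(0, 100).Select(_ => AsyncCall());
            await Task.WhenAll(tasks);
        }
    }
    ```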

    As for your questions:

    until the data has been fetched from DB
    So, my question is if we know the fetching is very fast (same machine or same datacenter)

    There are several potential caveats here:

    1. Database locality is not the only source of potential delays in the response time: your query can be "heavy" or involve transactions, or your database can simply be overloaded for some reason, resulting in the request taking a noticeable amount of time
    2. While the same machine can be "close" enough that async processing has a noticeable impact (I have actually seen a noticeable impact on CPU consumption in one app which switched to async processing over TCP/IP, but that was quite a rare case), for a datacenter this may well not be the case.

    and we know there are not many users sending requests (example: an admin panel website with 15-20 users) is it actually better to use the synchronous version and avoid context switching?

    Arguably, if you have low RPS then in general you should not care; the impact should be barely noticeable. Of course, as with all performance-related questions, you need to thoroughly test your actual setup (actual hardware, software, and usage/load patterns), but in this concrete case it sounds like it is not worth the time spent (unless you are very interested in the task and want to try some tools/approaches/etc. for science).
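    If you do decide to measure it, a minimal BenchmarkDotNet sketch might look like the following (`AppDbContext`/`Product` are placeholders for your own types; results will depend entirely on your hardware and database):

    ```csharp
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using Microsoft.EntityFrameworkCore;

    // Hypothetical benchmark comparing the two call styles against
    // your actual database.
    public class FetchBenchmark
    {
        private AppDbContext _db = null!;

        [GlobalSetup]
        public void Setup() => _db = new AppDbContext();

        [Benchmark(Baseline = true)]
        public List<Product> Sync() => _db.Products.ToList();

        [Benchmark]
        public Task<List<Product>> Async() => _db.Products.ToListAsync();
    }

    public class Program
    {
        public static void Main() => BenchmarkRunner.Run<FetchBenchmark>();
    }
    ```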