rust, reqwest

Fast request sending in Rust


I'm using the reqwest library to send requests in an old service of mine written with Rocket. The service's job consists of two parts:

  1. Take the incoming request body with the serde_json library
  2. Send this body to N other services using the reqwest library

But there is a "bottleneck" problem here. Once my service gets more than 500 requests per second, the scheduler tries to switch between them, which causes huge CPU usage (almost 800% in docker stats).

It seems like a worker pool problem?

If anyone has any ideas on how to solve this problem, I would be very grateful.

UPD: code example

use reqwest::Client;
use rocket::serde::json::Json;
use serde_json::Value;

pub async fn handler(data: Json<Value>) {
    let data = data.into_inner().to_string();
    // `urls` is the list of downstream services (defined elsewhere).
    for url in urls {
        // A brand-new Client is constructed for every URL on every request.
        match Client::new()
            .post(url)
            .header("Content-Type", "application/json")
            .body(data.clone())
            .send()
            .await
        {
            ...
        }
    }
}

Solution

  • We have had this problem in production before. We were using reqwest::get() instead, but the problem is the same: you are creating a new client for every request. Connection reuse/pooling happens at the client level, so if you create a client for each request, you cannot reuse connections at all. This results in:

      ◦ a new connection pool being allocated for every request,
      ◦ a fresh DNS lookup and a new TCP connection for every request, and
      ◦ a new TLS handshake for every HTTPS request.

    All of this overhead was enough to bring one of our services to its knees when it got very busy.

    The solution is to create a single reqwest::Client and share it around. Note that internally, clients have shared ownership of a pool. This means you can cheaply .clone() a client and all clones will share the same connection pool.
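    As a small illustration of that point, here is a sketch (not from the original answer) that builds one client and clones it into a few spawned tasks; every clone reuses the same pool. The URL and the task count are placeholders:

    use reqwest::Client;

    // Requires tokio with the "macros" and "rt-multi-thread" features.
    #[tokio::main]
    async fn main() {
        // One client, built once; all clones below share its connection pool.
        let client = Client::new();

        let mut handles = Vec::new();
        for i in 0..4 {
            // Cloning is cheap: it only bumps the reference count on the shared pool.
            let client = client.clone();
            handles.push(tokio::spawn(async move {
                // Placeholder URL standing in for a real downstream service.
                let resp = client.get("https://example.com").send().await;
                println!("task {i}: {:?}", resp.map(|r| r.status()));
            }));
        }
        for handle in handles {
            let _ = handle.await;
        }
    }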

    There are two straightforward ways to implement this strategy:

      1. Build the client once at startup and store it in your framework's managed state (for Rocket, pass it to .manage() and take a &State<Client> guard in your handlers), as sketched just below.
      2. Create a global, lazily-initialized client, for example with std::sync::LazyLock, as shown at the end of this answer.
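    Here is a minimal sketch of the managed-state approach with Rocket 0.5. The /health route, response type, and placeholder URL are illustrative, not part of the original question:

    use reqwest::Client;
    use rocket::{get, launch, routes, State};

    #[get("/health")]
    async fn health(client: &State<Client>) -> String {
        // Every request handled here reuses the same Client and its connection pool.
        // Placeholder URL standing in for one of your downstream services.
        match client.get("https://example.com").send().await {
            Ok(resp) => format!("upstream status: {}", resp.status()),
            Err(err) => format!("upstream error: {err}"),
        }
    }

    #[launch]
    fn rocket() -> _ {
        rocket::build()
            // Build the client once at startup and hand it to managed state.
            .manage(Client::new())
            .mount("/", routes![health])
    }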

    Note that if you want to enable in-process DNS response caching, you need to add the hickory-dns feature to your reqwest crate dependency, and enable this feature when you create the client. For example:

    static REQWEST_CLIENT: LazyLock<Client> =
        LazyLock::new(|| Client::builder().hickory_dns(true).build().unwrap());
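    Applied to the handler from the question, a minimal sketch could look like the one below. The urls parameter is an assumption (the question does not show where the URL list comes from), and the error handling is only illustrative:

    use std::sync::LazyLock;

    use reqwest::Client;
    use rocket::serde::json::Json;
    use serde_json::Value;

    // Built once and shared by every request; the hickory_dns builder from
    // the snippet above works here too.
    static REQWEST_CLIENT: LazyLock<Client> = LazyLock::new(Client::new);

    // Assumed signature: `urls` is passed in because the question does not
    // show where the list of downstream services lives.
    pub async fn handler(data: Json<Value>, urls: &[String]) {
        let data = data.into_inner().to_string();
        for url in urls {
            // The shared client keeps connections to each downstream service pooled.
            let result = REQWEST_CLIENT
                .post(url)
                .header("Content-Type", "application/json")
                .body(data.clone())
                .send()
                .await;
            if let Err(err) = result {
                eprintln!("failed to forward to {url}: {err}");
            }
        }
    }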