In the code below, I set up an axum server to handle incoming requests asynchronously, where the handler awaits another async function (in this case, one that sleeps for 500 ms). I have a test script that fires off 12 numbered requests and listens for their responses.
My expected behaviour is that request_handler should be invoked 12 times in rapid succession, and roughly half a second later the 12 responses should arrive in a similar rapid burst. However, after handling the first 6 requests (equal to the number of physical cores on my system), the server hangs for around 10 seconds before even invoking the handlers for the remaining 6 requests. What is causing this behaviour?
See the bottom of the post for readouts of the timestamps reported by both processes.
main.rs
use std::time::Duration;

use axum::{
    body::Bytes,
    http::{HeaderMap, Response, StatusCode},
    response::IntoResponse,
    routing::post,
    Extension, Router,
};
use chrono::Local;
use once_cell::sync::Lazy;
use tokio::time::{sleep, Instant};
use tower_http::cors::{Any, CorsLayer};

static TIME_START_PROGRAM: Lazy<Instant> = Lazy::new(Instant::now);

#[tokio::main]
async fn main() {
    // Permissive CORS so a browser client can reach the server.
    let cors = CorsLayer::new()
        .allow_origin(Any)
        .allow_methods(Any)
        .allow_headers(Any);

    let app = Router::new().route("/", post(request_handler)).layer(cors);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}

// Logs when the handler is invoked, awaits the async "work",
// then responds with a timestamped body.
async fn request_handler(headers: HeaderMap, body: Bytes) -> impl IntoResponse {
    let number = headers
        .get("X-Request-Id")
        .and_then(|h| h.to_str().ok())
        .unwrap_or("UNKNOWN");
    println!("request_handler [{number}] @ {}", get_timestamp());

    let result = generator(number).await;

    Response::builder()
        .status(StatusCode::OK)
        .body(format!("[{}] {result}", get_timestamp()))
        .unwrap()
}

// Stand-in for the real async work: sleep for 500 ms, then produce a result.
async fn generator(number: &str) -> String {
    sleep(Duration::from_millis(500)).await;
    format!("Handler received request ID: {number}")
}

fn get_timestamp() -> String {
    Local::now().format("%H:%M:%S%.3f").to_string()
}
test.ps1
$jobs = @()
for ($i = 0; $i -lt 12; $i++) {
    $index = $i # capture loop variable
    $timestamp = Get-Date -Format "HH:mm:ss.fff"
    Write-Output "Sending request $index at $timestamp"
    $jobs += Start-Job -ScriptBlock {
        $headers = @{ "X-Request-Id" = "$using:index" }
        try {
            $response = Invoke-RestMethod -Uri http://localhost:3000/ -Method Post -Headers $headers -Body ""
            "Response $using:index: $response"
        } catch {
            "Response $using:index: ERROR - $_"
        }
    }
}

# Wait for all jobs to finish and print results
$jobs | ForEach-Object {
    $result = Receive-Job -Job $_ -Wait
    Write-Output $result
    Remove-Job -Job $_
}
main.rs - Output
request_handler [1] @ 21:19:00.726
request_handler [2] @ 21:19:00.813
request_handler [3] @ 21:19:00.898
request_handler [4] @ 21:19:02.010
request_handler [5] @ 21:19:03.994
request_handler [8] @ 21:19:13.549
request_handler [6] @ 21:19:13.549
request_handler [9] @ 21:19:13.549
request_handler [7] @ 21:19:13.549
request_handler [10] @ 21:19:13.549
request_handler [11] @ 21:19:13.563
test.ps1 - Output
Sending request 0 at 21:18:58.422
Sending request 1 at 21:18:58.502
Sending request 2 at 21:18:58.580
Sending request 3 at 21:18:58.660
Sending request 4 at 21:18:58.741
Sending request 5 at 21:18:58.819
Sending request 6 at 21:18:58.913
Sending request 7 at 21:18:58.998
Sending request 8 at 21:18:59.077
Sending request 9 at 21:18:59.154
Sending request 10 at 21:18:59.232
Sending request 11 at 21:18:59.310
Response [21:19:01.167] Handler received request ID: 0
Response [21:19:01.306] Handler received request ID: 1
Response [21:19:01.322] Handler received request ID: 2
Response [21:19:01.404] Handler received request ID: 3
Response [21:19:02.524] Handler received request ID: 4
Response [21:19:04.496] Handler received request ID: 5
Response [21:19:14.061] Handler received request ID: 6
Response [21:19:14.061] Handler received request ID: 7
Response [21:19:14.061] Handler received request ID: 8
Response [21:19:14.061] Handler received request ID: 9
Response [21:19:14.061] Handler received request ID: 10
Response [21:19:14.076] Handler received request ID: 11
I managed to figure out the underlying cause of the original issue I was running into, which, as it turns out, this example does not properly reproduce.
The apparent bottleneck in request processing was caused by the client, not the server. The original program was sending bulk requests over HTTP/1.1 from JavaScript running in a browser, and the browser caps the number of concurrent connections per origin at 6. Each fetch() call returned a Promise immediately, but the underlying request was only sent once a connection was freed up by an earlier response arriving.
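To check that the server itself was not the bottleneck, something like the following standalone Rust client can fire all 12 requests from one process. This is a rough sketch rather than part of the original setup: it assumes the reqwest and tokio crates, and reuses the URL and X-Request-Id header from the example above. Unlike the browser, it does not limit itself to 6 connections per origin, so every handler runs concurrently and all responses come back roughly 500 ms after the burst is sent.
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // One shared client; its connection pool does not cap concurrent
    // connections per host the way a browser does for HTTP/1.1.
    let client = reqwest::Client::builder()
        .timeout(Duration::from_secs(5))
        .build()?;

    let mut handles = Vec::new();
    for i in 0..12 {
        let client = client.clone();
        handles.push(tokio::spawn(async move {
            let response = client
                .post("http://localhost:3000/")
                .header("X-Request-Id", i.to_string())
                .body("")
                .send()
                .await?
                .text()
                .await?;
            println!("Response {i}: {response}");
            Ok::<(), reqwest::Error>(())
        }));
    }

    // Wait for every request task to finish before exiting.
    for handle in handles {
        let _ = handle.await;
    }
    Ok(())
}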
The solution was to configure axum to use HTTP/2, which lets the browser multiplex many requests over a single connection automatically. Browsers only speak HTTP/2 over TLS, however, so this adds a TLS requirement.
use axum_server::tls_rustls::RustlsConfig;
use std::net::SocketAddr;

...

let app = Router::new()
    .route("/mesh", post(get_mesh))
    .layer(cors)
    .layer(Extension(generator));

let config = RustlsConfig::from_pem_file("./cert.pem", "./key.pem")
    .await
    .unwrap();

let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
axum_server::bind_rustls(addr, config)
    .serve(app.into_make_service())
    .await
    .unwrap();
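(For reference: RustlsConfig and bind_rustls come from the separate axum-server crate rather than axum itself, and require its tls-rustls feature to be enabled.)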
For development, it was straightforward to use OpenSSL to generate a self-signed certificate and add it to my Chrome instance as trusted. To get a certificate the browser would actually accept for TLS, I used this OpenSSL configuration:
cert.conf
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
x509_extensions = v3_req
[dn]
CN = localhost
[v3_req]
keyUsage = keyEncipherment, dataEncipherment, digitalSignature
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
IP.1 = 127.0.0.1
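For completeness, a command along these lines (not spelled out above) generates the cert.pem and key.pem referenced by the server code from that config:
openssl req -x509 -nodes -newkey rsa:2048 -days 365 -keyout key.pem -out cert.pem -config cert.conf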