When running a Gatling test, a report is printed to stdout after execution, such as:
21:16:16 ---- Global Information --------------------------------------------------------
21:16:16 > request count                             85445 (OK=85433  KO=12    )
21:16:16 > min response time                             7 (OK=79     KO=7     )
21:16:16 > max response time                         60001 (OK=4299   KO=60001 )
21:16:16 > mean response time                          256 (OK=252    KO=25460 )
21:16:16 > std deviation                               469 (OK=103    KO=29221 )
21:16:16 > response time 50th percentile               236 (OK=236    KO=5026  )
21:16:16 > response time 75th percentile               290 (OK=290    KO=60000 )
21:16:16 > response time 95th percentile               416 (OK=416    KO=60001 )
21:16:16 > response time 99th percentile               577 (OK=577    KO=60001 )
21:16:16 > mean requests/sec                         47.47 (OK=47.46  KO=0.01  )
21:16:16 ---- Response Time Distribution ------------------------------------------------
21:16:16 > t < 800 ms                                85188 ( 99.7%)
21:16:16 > 800 ms <= t < 1200 ms                       160 (  0.19%)
21:16:16 > t >= 1200 ms                                 85 (  0.1%)
21:16:16 > failed                                       12 (  0.01%)
The test executed 85445 requests, of which 12 failed (KO). 85433 were successful (OK). So far so good (85433+12 = 85445).
What I don't understand are the response time metrics. Take the mean response time: 256 appears to be the mean response time, but what does it mean to have 252 successful (OK) "mean response times"?
The same question applies to the other response time metrics. How can a standard deviation be OK or KO (successful or failed, respectively)?
The Reporting and analysis > Reports > Open-Source Reference only seems to explain the HTML report, which does not show these numbers.
The mean response time is in milliseconds.
The OK and KO columns are not separate metrics: each line of the report shows the same statistic computed three times, once over all requests, once over successful (OK) requests only, and once over failed (KO) requests only.
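Here is a minimal sketch of that idea (not Gatling's actual implementation): the `RequestLog` type and the field names are hypothetical, but it shows how each report line is just one statistic applied to three filtered subsets of the same requests, which is also why a standard deviation can be "OK" or "KO".

    // Hypothetical log record: one entry per executed request.
    final case class RequestLog(responseTimeMs: Long, ok: Boolean)

    def mean(ts: Seq[Long]): Double =
      if (ts.isEmpty) 0.0 else ts.map(_.toDouble).sum / ts.size

    def stdDev(ts: Seq[Long]): Double =
      if (ts.isEmpty) 0.0
      else {
        val m = mean(ts)
        math.sqrt(ts.map(t => (t - m) * (t - m)).sum / ts.size)
      }

    def printStats(logs: Seq[RequestLog]): Unit = {
      val all = logs.map(_.responseTimeMs)                  // first column: every request
      val ok  = logs.filter(_.ok).map(_.responseTimeMs)     // OK=... column: successful only
      val ko  = logs.filterNot(_.ok).map(_.responseTimeMs)  // KO=... column: failed only
      println(f"> mean response time ${mean(all)}%.0f (OK=${mean(ok)}%.0f KO=${mean(ko)}%.0f)")
      println(f"> std deviation      ${stdDev(all)}%.0f (OK=${stdDev(ok)}%.0f KO=${stdDev(ko)}%.0f)")
    }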
256 ms is the overall mean response time, 252 ms is the mean response time for successful (OK) requests, and 25,460 ms is the mean response time for failed (KO) requests. The KO mean is probably so much higher because it includes requests that timed out (note the 60,001 ms KO max).
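You can sanity-check this with the numbers from your report: the overall mean is the count-weighted average of the OK and KO means (approximately, since the report rounds each value).

    // Counts and per-status means copied from the report above.
    val okCount = 85433L
    val koCount = 12L
    val okMean  = 252.0   // ms, successful requests only
    val koMean  = 25460.0 // ms, failed requests only

    // Count-weighted average of the two per-status means.
    val overallMean = (okCount * okMean + koCount * koMean) / (okCount + koCount)
    println(f"$overallMean%.1f ms") // ≈ 255.5 ms, which matches the 256 shown in the report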