Tags: performance, grafana, k6

K6 scenarios to generate specific request per second rate


I wrote the following k6 scenario options for my performance testing, hoping to generate traffic at different requests-per-second rates while keeping the number of virtual users (VUs) constant at 100.


    // baseRPS is defined elsewhere in the original script; 1 iteration/s matches the stage comments
    const baseRPS = 1;

    export let options = {
        scenarios: {
            foo: {
                executor: 'ramping-arrival-rate',
                preAllocatedVUs: 100, // start & end at 100 VUs
                startRate: baseRPS,
                timeUnit: '1s',
                stages: [
                    { target: baseRPS * 10, duration: '0s' },  // jump to 10 iterations/s immediately
                    { target: baseRPS * 10, duration: '30s' }, // stay at 10 iterations/s for the next 30 seconds
                    { target: baseRPS * 20, duration: '0s' },  // jump to 20 iterations/s immediately
                    { target: baseRPS * 20, duration: '30s' }, // stay at 20 iterations/s for the next 30 seconds
                    { target: baseRPS * 30, duration: '0s' },  // jump to 30 iterations/s immediately
                    { target: baseRPS * 30, duration: '30s' }, // stay at 30 iterations/s for the next 30 seconds
                ],
            },
        },
        insecureSkipTLSVerify: true,
    };

The execution output shows that k6 went through each stage accordingly. My goal is to identify whether it is the number of VUs or the requests-per-second rate that contributes to the performance degradation.

[screenshots: k6 execution output for each stage]

I was surprised by the k6 results, because they show that on average the test generated only around 7 requests per second.

[screenshot: k6 end-of-test summary]

Did I configure the options incorrectly, or are 100 VUs simply not enough to generate more requests per second?

Furthermore, the Grafana dashboard that I stream my k6 test results to shows the requests-per-second rate never really changed that drastically throughout the test duration.

[screenshot: Grafana dashboard]


Solution

  • A VU can only send a single request at a time (if you want parallel requests, you need multiple VUs).

    The fastest response time for successful requests was measured at 2.69 seconds for your endpoint, the average being 11.87 seconds and the median 13.07 seconds. The slowest response was 16.5 seconds.

    From the k6 report and the Grafana dashboard you can easily see that your VUs have been maxed out (there should also be a log message in your test output).

    Assuming you have 100 VUs and each VU can send one new request every ~2.7 seconds, that gives you a request rate of 100 requests per 2.7 seconds (100/2.7s), or normalized to 1 second, about 37 rps. But your average response time is not 2.7 seconds but 11.87 seconds, so you have a rate of 100/11.87s, i.e. 8.42 rps, which is pretty close to the 7.011 rps reported for your HTTP requests (don't confuse the http_reqs rate with the iterations rate).
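    The arithmetic above is just Little's Law applied to a fixed worker pool: maximum throughput = workers / average latency. A quick back-of-the-envelope check in plain JavaScript, using the numbers from your k6 summary:

```javascript
// Little's Law for a fixed worker pool: max throughput = workers / avg latency.
const vus = 100;                  // preAllocatedVUs in the test
const avgResponseSeconds = 11.87; // average response time from the k6 summary

const maxRps = vus / avgResponseSeconds;
console.log(maxRps.toFixed(2)); // prints "8.42" -- an upper bound close to the observed 7.011 rps
```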

    So yes, you would need more VUs to generate your desired request rate. But do note that more VUs mean more load on your service, and with more VUs the request rate might actually drop (or the error rate might spike): your service is likely already congested. Read more about the different kinds of load tests at https://grafana.com/blog/2024/01/30/api-load-testing/
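    If you do raise the VU count, note that the arrival-rate executors also accept maxVUs, which lets k6 initialize extra VUs on demand once the pre-allocated pool is exhausted. A minimal sketch (the value 500 is an assumption; tune it for your service):

        export let options = {
            scenarios: {
                foo: {
                    executor: 'ramping-arrival-rate',
                    preAllocatedVUs: 100, // VUs initialized up front
                    maxVUs: 500,          // assumption: allow k6 to add VUs when the pool runs dry
                    startRate: 1,
                    timeUnit: '1s',
                    stages: [
                        { target: 10, duration: '30s' },
                        { target: 20, duration: '30s' },
                        { target: 30, duration: '30s' },
                    ],
                },
            },
        };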