performance, grafana, load-testing, influxdb, k6

How to visualize endpoint test results individually when load testing with k6, InfluxDB, and Grafana?


I currently have a working load-testing environment using k6, InfluxDB, and Grafana. Right now it records requests to a number of endpoints and sends the output to a Grafana dashboard, which aggregates the results and gives me average, min, max, etc. values across all the requests.

I'm trying to figure out how to view those statistics for each endpoint individually. I already have the k6 results written to a separate file for each endpoint, but they are always aggregated in Grafana.

My k6 script:

import http from 'k6/http';
import { sleep } from 'k6';

// Request params (headers, etc.) and the request body are loaded from JSON
// files whose names are passed in as environment variables.
const config = JSON.parse(open("./"+__ENV.CONFIG_PATH+".json"));
const body = open("./"+__ENV.BODY_PATH+".json");

export const options = {
    // Ramp up to 100 VUs over 10s, down to 10 over 15s, then down to 0.
    stages: [
        { duration: '10s', target: 100 },
        { duration: '15s', target: 10 },
        { duration: '5s', target: 0 }
    ],
};

export default function () {
    http.request(__ENV.METHOD, [app base URL] + __ENV.API_PATH, body, config);
    sleep(1);
}
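
For reference, the file loaded into config is parsed straight into the params argument of http.request, so it holds per-request options such as headers. The snippet below is purely hypothetical since the real config and body files aren't shown here; the header values and token are placeholders.

{
    "headers": {
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>"
    },
    "timeout": "30s"
}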

The command run for each endpoint being tested, passing the configs as environment variables:

k6 run -e METHOD=POST -e API_PATH="..." -e BODY_PATH="..." -e CONFIG_PATH="..." --out influxdb=http://localhost:8086/k6 load.js > [results filename]

An example of a results .txt file (for a single endpoint):


          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: load.js
     output: InfluxDBv1 (http://localhost:8086)

  scenarios: (100.00%) 1 scenario, 200 max VUs, 1m0s max duration (incl. graceful stop):
           * default: Up to 200 looping VUs for 30s over 3 stages (gracefulRampDown: 30s, gracefulStop: 30s)


running (0m00.9s), 019/200 VUs, 0 complete and 0 interrupted iterations
default   [   3% ] 019/200 VUs  00.9s/30.0s

running (0m01.9s), 039/200 VUs, 19 complete and 0 interrupted iterations
default   [   6% ] 039/200 VUs  01.9s/30.0s

running (0m02.9s), 059/200 VUs, 56 complete and 0 interrupted iterations
default   [  10% ] 059/200 VUs  02.9s/30.0s

running (0m03.9s), 079/200 VUs, 114 complete and 0 interrupted iterations
default   [  13% ] 079/200 VUs  03.9s/30.0s

running (0m04.9s), 099/200 VUs, 191 complete and 0 interrupted iterations
default   [  16% ] 099/200 VUs  04.9s/30.0s

running (0m05.9s), 119/200 VUs, 287 complete and 0 interrupted iterations
default   [  20% ] 119/200 VUs  05.9s/30.0s

running (0m06.9s), 138/200 VUs, 404 complete and 0 interrupted iterations
default   [  23% ] 138/200 VUs  06.9s/30.0s

running (0m07.9s), 158/200 VUs, 539 complete and 0 interrupted iterations
default   [  26% ] 158/200 VUs  07.9s/30.0s

running (0m08.9s), 178/200 VUs, 695 complete and 0 interrupted iterations
default   [  30% ] 178/200 VUs  08.9s/30.0s

running (0m09.9s), 198/200 VUs, 872 complete and 0 interrupted iterations
default   [  33% ] 198/200 VUs  09.9s/30.0s

running (0m10.9s), 200/200 VUs, 1061 complete and 0 interrupted iterations
default   [  36% ] 200/200 VUs  10.9s/30.0s

running (0m11.9s), 200/200 VUs, 1253 complete and 0 interrupted iterations
default   [  40% ] 200/200 VUs  11.9s/30.0s

running (0m12.9s), 200/200 VUs, 1446 complete and 0 interrupted iterations
default   [  43% ] 200/200 VUs  12.9s/30.0s

running (0m13.9s), 200/200 VUs, 1639 complete and 0 interrupted iterations
default   [  46% ] 200/200 VUs  13.9s/30.0s

running (0m14.9s), 200/200 VUs, 1835 complete and 0 interrupted iterations
default   [  50% ] 200/200 VUs  14.9s/30.0s

running (0m15.9s), 200/200 VUs, 2035 complete and 0 interrupted iterations
default   [  53% ] 200/200 VUs  15.9s/30.0s

running (0m16.9s), 200/200 VUs, 2235 complete and 0 interrupted iterations
default   [  56% ] 200/200 VUs  16.9s/30.0s

running (0m17.9s), 200/200 VUs, 2435 complete and 0 interrupted iterations
default   [  60% ] 200/200 VUs  17.9s/30.0s

running (0m18.9s), 200/200 VUs, 2635 complete and 0 interrupted iterations
default   [  63% ] 200/200 VUs  18.9s/30.0s

running (0m19.9s), 200/200 VUs, 2831 complete and 0 interrupted iterations
default   [  66% ] 200/200 VUs  19.9s/30.0s

running (0m20.9s), 200/200 VUs, 3028 complete and 0 interrupted iterations
default   [  70% ] 200/200 VUs  20.9s/30.0s

running (0m21.9s), 200/200 VUs, 3224 complete and 0 interrupted iterations
default   [  73% ] 200/200 VUs  21.9s/30.0s

running (0m22.9s), 200/200 VUs, 3415 complete and 0 interrupted iterations
default   [  76% ] 200/200 VUs  22.9s/30.0s

running (0m23.9s), 200/200 VUs, 3613 complete and 0 interrupted iterations
default   [  80% ] 200/200 VUs  23.9s/30.0s

running (0m24.9s), 200/200 VUs, 3809 complete and 0 interrupted iterations
default   [  83% ] 200/200 VUs  24.9s/30.0s

running (0m25.9s), 182/200 VUs, 4000 complete and 0 interrupted iterations
default   [  86% ] 182/200 VUs  25.9s/30.0s

running (0m26.9s), 142/200 VUs, 4179 complete and 0 interrupted iterations
default   [  90% ] 142/200 VUs  26.9s/30.0s

running (0m27.9s), 101/200 VUs, 4321 complete and 0 interrupted iterations
default   [  93% ] 101/200 VUs  27.9s/30.0s

running (0m28.9s), 063/200 VUs, 4418 complete and 0 interrupted iterations
default   [  96% ] 063/200 VUs  28.9s/30.0s

running (0m29.9s), 024/200 VUs, 4479 complete and 0 interrupted iterations
default   [ 100% ] 024/200 VUs  29.9s/30.0s

     data_received..................: 7.0 MB 228 kB/s
     data_sent......................: 738 kB 24 kB/s
     http_req_blocked...............: avg=692.22µs min=330ns   med=666ns    max=39.61ms p(90)=965ns    p(95)=1.36µs  
     http_req_connecting............: avg=78.13µs  min=0s      med=0s       max=20.26ms p(90)=0s       p(95)=0s      
     http_req_duration..............: avg=14.92ms  min=8.46ms  med=13.44ms  max=86.66ms p(90)=21.09ms  p(95)=24.97ms 
       { expected_response:true }...: avg=14.92ms  min=8.46ms  med=13.44ms  max=86.66ms p(90)=21.09ms  p(95)=24.97ms 
     http_req_failed................: 0.00%  ✓ 0          ✗ 4503 
     http_req_receiving.............: avg=166.01µs min=29.99µs med=78.51µs  max=32.71ms p(90)=143.23µs p(95)=184.04µs
     http_req_sending...............: avg=134.22µs min=55.7µs  med=114.03µs max=9.02ms  p(90)=187.06µs p(95)=256.66µs
     http_req_tls_handshaking.......: avg=602.32µs min=0s      med=0s       max=35.03ms p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=14.62ms  min=8.28ms  med=13.23ms  max=63.88ms p(90)=20.74ms  p(95)=24.57ms 
     http_reqs......................: 4503   146.220985/s
     iteration_duration.............: avg=1.02s    min=1.01s   med=1.01s    max=1.1s    p(90)=1.02s    p(95)=1.03s   
     iterations.....................: 4503   146.220985/s
     vus............................: 24     min=19       max=200
     vus_max........................: 200    min=200      max=200


running (0m30.8s), 000/200 VUs, 4503 complete and 0 interrupted iterations
default ✓ [ 100% ] 000/200 VUs  30s

Screenshot of what I'm seeing in Grafana:

[Grafana screenshot]


Solution

  • I ended up stumbling across a solution thanks to the helpful comments by @knittl and @markalex, which pointed me in the right direction.

    As mentioned above in the comments, k6 tags all HTTP metrics with the request URL by default (via the url and name tags, among others). So the solution was to group by URL, which is represented by the name tag in the community dashboard I found online. I'll post a link to it below for anyone who needs the same solution I did.

    SELECT count("value"), min("value"), median("value"), max("value"), mean("value"), percentile("value", 95) FROM /^http_req_duration$/ WHERE time >= now() - 24h and time <= now() GROUP BY "name"
    

    https://grafana.com/grafana/dashboards/13719-k6-load-testing-results-by-groups/
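
    As a follow-up: this works because k6 attaches a name tag to every HTTP metric, defaulting to the full request URL, and the query above groups by that tag (in Grafana's InfluxDB query editor this corresponds to a GROUP BY tag(name) clause). If an endpoint's URL contains dynamic parts (IDs, query strings), the name tag can also be overridden per request so everything still lands under one label. Below is a minimal sketch, separate from my actual script; the ENDPOINT_NAME variable, URL, and payload are placeholders for illustration only.

    import http from 'k6/http';
    import { sleep } from 'k6';

    export const options = { vus: 5, duration: '10s' };

    export default function () {
        const params = {
            headers: { 'Content-Type': 'application/json' },
            // Overriding the default name tag groups all requests for this endpoint
            // under a single label in InfluxDB, even if the URL varies per request.
            // ENDPOINT_NAME is a made-up env var used only for illustration.
            tags: { name: __ENV.ENDPOINT_NAME || 'example-endpoint' },
        };
        http.post('https://test.k6.io/' + (__ENV.API_PATH || ''), JSON.stringify({ example: true }), params);
        sleep(1);
    }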