Tags: performance, cassandra, nosql, scylla, ycsb

Correlation between throughput and latency when benchmarking with YCSB


I'm using YCSB to benchmark a number of different NoSQL databases. However, when playing around with the number of client threads, I have a hard time interpreting the throughput vs. latency results.

For example, when benchmarking Cassandra running Workload A (50/50 reads and updates) with 16 client threads, the following command is executed:

bin/ycsb run cassandra-cql -p hosts=xx.xx.xx.xx -p recordcount=525600 -p operationcount=525600 -threads 16 -P workloads/workloada -s > workloada_525600_16_threads_run_res.txt

which gives the following output:

[OVERALL], RunTime(ms), 62751
[OVERALL], Throughput(ops/sec), 8375.962136061577
[TOTAL_GCS_PS_Scavenge], Count, 64
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 289
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.46055042947522745
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 64
[TOTAL_GC_TIME], Time(ms), 289
[TOTAL_GC_TIME_%], Time(%), 0.46055042947522745
[READ], Operations, 262650
[READ], AverageLatency(us), 1844.6075042832667
[READ], MinLatency(us), 290
[READ], MaxLatency(us), 116159
[READ], 95thPercentileLatency(us), 3081
[READ], 99thPercentileLatency(us), 7551
[READ], Return=OK, 262650
[CLEANUP], Operations, 16
[CLEANUP], AverageLatency(us), 139458.5
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 2232319
[CLEANUP], 95thPercentileLatency(us), 19
[CLEANUP], 99thPercentileLatency(us), 2232319
[UPDATE], Operations, 262950
[UPDATE], AverageLatency(us), 1764.8220193953223
[UPDATE], MinLatency(us), 208
[UPDATE], MaxLatency(us), 95807
[UPDATE], 95thPercentileLatency(us), 2901
[UPDATE], 99thPercentileLatency(us), 7031
[UPDATE], Return=OK, 262950

Running the same operation with 32 threads, I get:

[OVERALL], RunTime(ms), 51785
[OVERALL], Throughput(ops/sec), 10149.65723665154
[TOTAL_GCS_PS_Scavenge], Count, 124
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 310
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.5986289466061601
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 124
[TOTAL_GC_TIME], Time(ms), 310
[TOTAL_GC_TIME_%], Time(%), 0.5986289466061601
[READ], Operations, 262848
[READ], AverageLatency(us), 2947.844628834916
[READ], MinLatency(us), 363
[READ], MaxLatency(us), 194559
[READ], 95thPercentileLatency(us), 5079
[READ], 99thPercentileLatency(us), 11055
[READ], Return=OK, 262848
[CLEANUP], Operations, 32
[CLEANUP], AverageLatency(us), 69601.5625
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 2228223
[CLEANUP], 95thPercentileLatency(us), 3
[CLEANUP], 99thPercentileLatency(us), 2228223
[UPDATE], Operations, 262752
[UPDATE], AverageLatency(us), 2881.930485781269
[UPDATE], MinLatency(us), 316
[UPDATE], MaxLatency(us), 203391
[UPDATE], 95thPercentileLatency(us), 4987
[UPDATE], 99thPercentileLatency(us), 10711
[UPDATE], Return=OK, 262752

The overall runtime is lower and thus the throughput is higher, but the latencies are higher as well.

I'm not quite sure how to interpret these results. How would you find the "appropriate" number of client threads to run?


Solution

  • In order to have a qualified benchmark, you should first define the SLA requirements you want your system to achieve. Say your workload pattern is 50/50 writes/reads and your SLA requirement is 10K ops/sec throughput with a 99th percentile latency below 10 ms. Use the YCSB -target flag to generate the needed throughput, and try various thread counts to see which one meets your SLA needs; see the sketch below.
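
    For example, a minimal sweep could look like the following. The thread counts and the 10K ops/sec target are placeholders to adjust to your own SLA; the host and record/operation counts are copied from the command in the question:

      # Sweep client thread counts at a fixed target throughput, saving each run's report.
      for t in 8 16 32 64; do
        bin/ycsb run cassandra-cql -p hosts=xx.xx.xx.xx \
          -p recordcount=525600 -p operationcount=525600 \
          -target 10000 -threads $t \
          -P workloads/workloada -s > workloada_target10k_${t}_threads_run_res.txt
      done

    The smallest thread count whose 99th percentile READ/UPDATE latencies stay under your SLA while sustaining the target throughput is a reasonable choice.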

    It makes a lot of sense that when more threads are used, the throughput increases (more ops/sec), but that comes at a latency price. You should look into the relevant database metrics to try and find your bottleneck (for example CPU, disk I/O, network, or the client machine itself).
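
    One way to make sense of the two runs above is Little's Law: requests in flight ≈ throughput × average latency, and since each YCSB client thread issues one operation at a time, the in-flight count is capped by the thread count. A rough sanity check with the throughput and average READ/UPDATE latencies copied from the reports above (this is only arithmetic, not part of YCSB):

      # Little's Law check: in-flight requests ≈ throughput (ops/s) × avg latency (s).
      awk 'BEGIN {
        printf "16 threads: %.1f requests in flight\n", 8375.96  * ((1844.6 + 1764.8) / 2) * 1e-6
        printf "32 threads: %.1f requests in flight\n", 10149.66 * ((2947.8 + 2881.9) / 2) * 1e-6
      }'

    Both come out close to the thread count (roughly 15 and 30), which confirms that the client threads are the concurrency limit: throughput ≈ threads / latency. Doubling the threads here bought only about 21% more throughput while average latency grew about 60%, so the server is already queueing requests; adding more threads mainly adds waiting time rather than throughput.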

    You can read more about the Do's and Don'ts of DB benchmarking here.