performance-testing, loadrunner, hp-performance-center

Am I applying Little's Law correctly to model a workload for a website?


Using the metrics shown below, I applied a workload modeling formula (Little's Law) to arrive at what I believe are the correct settings to adequately load test the application in question.

From Google Analytics (peak hour of the peak day): a throughput of 1.35 requests per second, and a combined response time plus think time of 74.21 seconds.

The formula is N = Throughput * (Response Time + Think Time)

Using the formula, N = 1.35 (throughput) * 74.21 (response time + think time) ≈ 100.
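As a quick check on the arithmetic, here is a minimal sketch; it assumes the throughput is expressed in requests per second and the times in seconds, since neither unit is stated above:

    # Little's Law: N = X * (R + Z)
    # X = throughput, R = response time, Z = think time
    throughput = 1.35        # assumed requests per second (peak hour)
    resp_plus_think = 74.21  # response time + think time, assumed seconds

    n = throughput * resp_plus_think
    print(f"N = {n:.2f} -> roughly {round(n)} concurrent users")
    # prints: N = 100.18 -> roughly 100 concurrent users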

Therefore, according to my calculations, we can simulate the load the server experienced on the peak day during the peak hour with 100 users going through the business processes at a pace of 75 seconds between iterations (think time ignored).

So, to determine how the system responds under a heavier-than-normal load, we can double (200 users) or triple (300 users) the value of N and record the average response time for each transaction.
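The heavier-load scenarios follow by scaling N while holding the pacing constant. A small sketch of the resulting offered load, using the rounded 75-second pacing from above (the linear scaling is only an expectation and holds only while the server keeps up):

    # Scale the baseline N while keeping the 75-second pacing between iterations.
    baseline_users = 100
    pacing_seconds = 75  # rounded iteration interval from the calculation above

    for multiplier in (1, 2, 3):
        users = baseline_users * multiplier
        offered_rate = users / pacing_seconds  # iterations started per second
        print(f"{users} users at {pacing_seconds}s pacing -> {offered_rate:.2f} iterations/sec")
    # 100 users -> 1.33/sec, 200 users -> 2.67/sec, 300 users -> 4.00/sec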

Is this all correct?


Solution

  • When you do a direct observation of the logs for the site, blocked by session duration, what is the maximum number of IP addresses counted in each block?

    Little's Law tends to undercount sessions and their overhead in favor of transactional throughput. That's OK if your session resources are recovered instantly, but most sites hold onto them for longer than 110% of the longest inter-request window for a user (the period from one request to the next). One quick way to pull that per-block IP count out of the raw logs is sketched below.
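This is only a sketch of that direct observation, not part of the original answer: it assumes a combined-format access log (client IP as the first field, timestamp in square brackets), a hypothetical file name access.log, and a 30-minute session duration as the block size.

    import re
    from collections import defaultdict
    from datetime import datetime

    SESSION_DURATION = 30 * 60   # assumed session timeout, in seconds
    LOG_PATTERN = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\]')  # IP ... [timestamp]

    def max_ips_per_block(log_path):
        """Bucket requests into fixed blocks of SESSION_DURATION seconds and
        return the largest count of distinct client IPs seen in any block."""
        blocks = defaultdict(set)
        with open(log_path) as log:
            for line in log:
                match = LOG_PATTERN.match(line)
                if not match:
                    continue
                ip, stamp = match.groups()
                ts = datetime.strptime(stamp, "%d/%b/%Y:%H:%M:%S %z").timestamp()
                blocks[int(ts // SESSION_DURATION)].add(ip)
        return max((len(ips) for ips in blocks.values()), default=0)

    print(max_ips_per_block("access.log"))

Comparing that maximum against the N from Little's Law shows how much session overhead the transactional model is leaving out.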