We have been using the PageSpeed Insights API to track Lighthouse data for some pages. Running a Lighthouse audit in Chrome with the same throttling settings consistently reports different numbers, and it is not clear why. Can someone help explain this?
For example:
Using the PageSpeed Insights API, we are tracking the Lighthouse Total Blocking Time. We get this data by sending a request to https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=[MY_URL]&key=[MY_API_KEY]&strategy=desktop&category=performance
The response object contains:
{
  "lighthouseResult": {
    "audits": {
      "total-blocking-time": { ... }
    }
  }
}
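For reference, here is a minimal sketch of how we pull the metric out of that response. It assumes Node 18+ (for the global fetch); the endpoint and field names match the PSI v5 response above, and [MY_URL]/[MY_API_KEY] are placeholders:

const endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

async function fetchTbt(pageUrl: string, apiKey: string): Promise<number> {
  const params = new URLSearchParams({
    url: pageUrl,
    key: apiKey,
    strategy: 'desktop',
    category: 'performance',
  });
  const res = await fetch(`${endpoint}?${params}`);
  const body = await res.json();
  // Lab Total Blocking Time in milliseconds.
  return body.lighthouseResult.audits['total-blocking-time'].numericValue;
}

fetchTbt('[MY_URL]', '[MY_API_KEY]').then((tbt) => console.log(`TBT: ${tbt} ms`));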
From this response, we are regularly seeing a Total Blocking Time between 800 and 1000 ms. This lines up with the Total Blocking Time we see when auditing the site on pagespeed.web.dev (see this screenshot for an example).
When I run Lighthouse via Chrome DevTools or via the Lighthouse CLI, our Total Blocking Time is consistently reported between 200 and 300 ms. Our other metrics are consistently better as well (see this screenshot).
Lighthouse via Chrome DevTools shows that it is using the same throttling settings as PageSpeed Insights (see these throttle settings from Chrome).
Why are these values consistently so far apart when both sources say they are using the same throttling settings?
I have tried running Lighthouse audits repeatedly, both via Chrome DevTools and the PageSpeed Insights API. The lab data in PageSpeed Insights consistently does not match what we see in Chrome DevTools.
Lighthouse applies a 4x CPU slowdown in both PSI and DevTools.
This means that if you're on a very fast developer machine, your machine at a 4x slowdown is still much faster than a slower machine at a 4x slowdown.
If you hover over the Device icon you get some more details:
Here in DevTools you can see my machine is baselined at 2635. With the 4x slowdown that gives an effective device score of ~659:
On PageSpeed Insights, however, the device score was 1298, so ~325 after the slowdown is applied:
This is half the speed of the DevTools run, so it's no surprise that PSI reports worse numbers, particularly for CPU-dependent metrics like TBT.
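If you want to check this for your own runs without hovering in the UI, the Lighthouse result JSON exposes the benchmark score as environment.benchmarkIndex (nested under lighthouseResult in the PSI API response). Here is a small sketch of the arithmetic above, using my 2635 and 1298 values as examples:

// Effective device score after Lighthouse's 4x CPU slowdown.
const CPU_SLOWDOWN_MULTIPLIER = 4;

function effectiveScore(benchmarkIndex: number): number {
  return Math.round(benchmarkIndex / CPU_SLOWDOWN_MULTIPLIER);
}

console.log(effectiveScore(2635)); // ~659 (my DevTools machine)
console.log(effectiveScore(1298)); // ~325 (the PSI server)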
Additionally, running it locally will be dependent on your local setup (which Chrome extensions you're running, what else the machine is doing at the time of the test, etc.).
Throttling is complicated (see the Lighthouse docs on it here) and at present there is no way to change the settings used by either PSI or DevTools Lighthouse (the CLI offers more customisation options; see the sketch below).
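For completeness, here is a sketch of what that customisation can look like via the Node module, which the CLI is built on. I'm assuming the lighthouse and chrome-launcher npm packages, and the throttling values are illustrative rather than the exact PSI defaults, so check the Lighthouse docs for your version:

import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function run() {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse('https://example.com', {
    port: chrome.port,
    onlyCategories: ['performance'],
    throttlingMethod: 'simulate',   // simulated throttling, as PSI uses
    throttling: {
      cpuSlowdownMultiplier: 4,     // the 4x slowdown discussed above
      rttMs: 150,                   // example network round-trip time
      throughputKbps: 1638.4,       // example network throughput
    },
  });
  console.log(result?.lhr.audits['total-blocking-time'].displayValue);
  await chrome.kill();
}

run();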
However, performance is not a single number, and your real users will be on varying devices, so neither PSI nor local DevTools is the "right" answer. What's more important is to compare like for like (i.e. baseline in DevTools, then re-run with your changes in DevTools) rather than striving for an exact replication of another tool.