pagespeed, pagespeed-insights

What causes the difference between the Core Web Vitals assessment and the PageSpeed Insights performance score?


While testing different sites with PageSpeed Insights, I keep running into the same phenomenon, which I can't explain:

There are multiple URLs that pass the Core Web Vitals assessment yet have all-red lab metrics and a low Performance score, like in this screenshot:

[Screenshot: PageSpeed Insights result]

Yes, I know that the Core Web Vitals assessment is calculated by aggregating the loading metrics of real Chrome users over the last 28 days. But this remains unexplained to me: how can the loading metrics differ so hugely between the Core Web Vitals assessment and the Performance diagnosis?

And it is not a single case or a single URL. I see this behaviour often, too often to believe it could be an accident.

Could somebody explain these differences to me?

PS: There is one opinion about this difference that would explain it perfectly, but I strongly doubt it. The opinion is:


Solution

  • There are a number of possible differences, many of which are covered in the Google Chrome team's guidance, for example in our Core Web Vitals workflows with Google tools doc and in each of our Optimize guides (LCP, CLS). Full disclosure: I helped write these.

    First up, as Rick alluded to in the comments, make sure you are comparing like for like. Often real-user data is not available for a specific URL, so the top section may show data for the origin as a whole rather than just that URL.
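
    If you want to check this yourself, you can query the field data directly through the Chrome UX Report (CrUX) API, which is where the top section comes from. Below is a minimal sketch, assuming you have a CrUX API key from a Google Cloud project; the page URL is a placeholder. It asks for URL-level data first and falls back to the origin if none exists, which mirrors what PageSpeed Insights does.

    ```ts
    // Sketch: query the CrUX API for field data, first at URL level, then
    // falling back to the origin. CRUX_API_KEY and pageUrl are placeholders.
    const CRUX_ENDPOINT = 'https://chromeuxreport.googleapis.com/v1/records:queryRecord';
    const CRUX_API_KEY = 'YOUR_API_KEY';

    async function queryCrux(body: Record<string, string>) {
      const res = await fetch(`${CRUX_ENDPOINT}?key=${CRUX_API_KEY}`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(body),
      });
      return res.ok ? res.json() : null; // a 404 means no field data for that key
    }

    const pageUrl = 'https://example.com/some-page';
    const urlRecord = await queryCrux({ url: pageUrl });
    if (urlRecord) {
      console.log('URL-level field data:', urlRecord.record.metrics);
    } else {
      // No URL-level data: PageSpeed Insights would show origin-level data instead.
      const originRecord = await queryCrux({ origin: new URL(pageUrl).origin });
      console.log('Origin-level field data:', originRecord?.record.metrics);
    }
    ```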

    Next, Lighthouse does a simulated load under specific conditions. It also does a single cold load of the above-the-fold content only, without any interaction.
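
    To make those conditions concrete, here is roughly what the lab section does, sketched with the Lighthouse Node module and chrome-launcher; the URL and the exact option values are illustrative, and a recent (ESM) Lighthouse version is assumed.

    ```ts
    // Sketch: one cold, simulated, throttled mobile load of a single URL -
    // roughly what the lab section of PageSpeed Insights reports.
    import lighthouse from 'lighthouse';
    import * as chromeLauncher from 'chrome-launcher';

    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const result = await lighthouse('https://example.com/some-page', {
      port: chrome.port,
      onlyCategories: ['performance'],
      formFactor: 'mobile',         // emulate a mid-range phone
      throttlingMethod: 'simulate', // simulated slow network + CPU slowdown
    });

    if (result) {
      console.log('Performance score:', result.lhr.categories.performance.score);
      console.log('Lab LCP (ms):', result.lhr.audits['largest-contentful-paint'].numericValue);
    }
    await chrome.kill();
    ```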

    Looking at this example specifically (and assuming it is URL-level data, or at least representative of that URL):

    LCP and FCP can be a lot faster (or slower) for real users than for Lighthouse, depending on their network conditions, their device hardware, and whether they arrive with a warm cache, whereas Lighthouse always measures a single cold load under one fixed, throttled configuration.
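
    If you want to see what real users actually get for these metrics, you can report them from the page itself, for example with the web-vitals library. A sketch follows; the '/analytics' endpoint is a placeholder you would replace with your own collector.

    ```ts
    // Sketch: report real-user FCP and LCP so the field distribution can be
    // compared with Lighthouse's single simulated value.
    import { onFCP, onLCP, type Metric } from 'web-vitals';

    function send(metric: Metric) {
      const body = JSON.stringify({
        name: metric.name,     // 'FCP' or 'LCP'
        value: metric.value,   // milliseconds
        rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
      });
      // sendBeacon survives the page being unloaded; fetch with keepalive is the fallback.
      if (!navigator.sendBeacon?.('/analytics', body)) {
        fetch('/analytics', { method: 'POST', body, keepalive: true });
      }
    }

    onFCP(send);
    onLCP(send);
    ```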

    For CLS, differences here typically indicate post-load CLS, as discussed in our guidance. CLS is measured throughout the life of the page, and while it's true that CLS is often bad during page load (which is all that Lighthouse measures), some of it can happen later. For example, if you scroll and lazy-loaded images or ads pop in without reserved space. I often see scroll issues with poorly implemented sticky headers too. Interacting with a page can also cause CLS if the shift happens after the 500 ms grace period that follows an interaction.
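
    To see where such post-load shifts come from on a specific page, a small layout-shift observer run on the page (for example in a debug build) is enough. A sketch using standard browser APIs; the LayoutShift entry type isn't in TypeScript's DOM lib, hence the local interface.

    ```ts
    // Sketch: log every layout shift, flagging shifts that happen after the load
    // event (which Lighthouse never sees) and shifts excluded from CLS because
    // they followed recent user input (the ~500 ms grace period mentioned above).
    interface LayoutShiftEntry extends PerformanceEntry {
      value: number;
      hadRecentInput: boolean;
      sources?: Array<{ node?: Node }>;
    }

    const nav = performance.getEntriesByType('navigation')[0] as
      PerformanceNavigationTiming | undefined;

    new PerformanceObserver((list) => {
      for (const entry of list.getEntries() as LayoutShiftEntry[]) {
        const afterLoad = !!nav && nav.loadEventEnd > 0 && entry.startTime > nav.loadEventEnd;
        console.log(
          `layout shift ${entry.value.toFixed(4)}`,
          afterLoad ? '(after load)' : '(during load)',
          entry.hadRecentInput ? '(excluded: recent input)' : '',
          entry.sources?.map((s) => s.node),
        );
      }
    }).observe({ type: 'layout-shift', buffered: true });
    ```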

    So, in summary, Lighthouse may or may not be representative of how real users experience your site.

    Does this mean Lighthouse is useless? Absolutely not! Lighthouse is incredibly useful for identifying potential performance issues. For loading issues (FCP and LCP), its numbers can be worse than real life, since it does a simple, cold load of the page. But optimizing the worst-case scenario will still benefit your site!

    For CLS (and similarly for INP), this is more a limitation of what a simple page load can show. So where Lighthouse's findings match the field data, you can use its advice to improve the real-life metrics. Where they don't, you have at least eliminated load-time issues as the likely cause and can look at where else these metrics might be going wrong.