pagespeed, lighthouse, pagespeed-insights, core-web-vitals, time-to-first-byte

How can I have a bad Time to First Byte on Core Web Vitals, but a good TTFB on all PageSpeed tests?


Question

I'm trying to find an explanation for the following inconsistency on Google's PageSpeed Insights (Lighthouse):

The "Core Web Vitals Assessment" section at the top always lists a quite bad TTFB. For instance, it looks like this for a random static HTML document without any additional dependencies (700ms):

TTFB is 0.7s on Core Web Vitals

However, on the same result page, Google also reports a much better value (40ms) for the "Initial server response time was short" audit (in the "passed audits" section):

Initial server response time is only 40ms

In this example, that's a 17.5× (1750%) difference!

How is this possible?


Additional Thoughts

Both metrics are supposed to describe the same thing, aren't they?

I do understand that the Core Web Vitals are supposed to reflect "what your real users are experiencing", i.e. field data collected from real users' previous visits to the site. In contrast, the other value describes a single test snapshot taken at that moment.

It's not as simple as "I just had a lucky shot, but usually it takes longer", though. I performed lots of tests from different locations, at different times, using various devices etc., and all manual tests were pretty fast. Only the Core Web Vitals are much worse, and I can't find an explanation for it.

"Users having a slower Internet connection" isn't an explanation either, right? I could understand the difference until the last part of the page is there, but how can the very first byte be affected by this in such a drastic way?


Solution

  • Both metrics are supposed to describe the same thing, aren't they?

    No, they are not the same. Lighthouse deliberately avoids the term TTFB and talks about server response time instead, because they are different metrics.
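    To see what the field metric actually includes, you can log it with Google's web-vitals library. A minimal sketch (the library is designed to match what Chrome measures and reports into the CrUX field data):

    ```ts
    // Sketch: log the field-style TTFB with the web-vitals library
    // (https://github.com/GoogleChrome/web-vitals). This value is measured
    // from the very start of the navigation, so redirect and DNS time count.
    import { onTTFB } from 'web-vitals';

    onTTFB((metric) => {
      // metric.value is the time in ms from navigation start to the first
      // byte of the response -- the figure behind the Core Web Vitals number.
      console.log('Field TTFB:', metric.value, 'ms');
    });
    ```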

    Lighthouse typically normalises URLs. So if you enter https://web.dev/ttfb and that redirects to https://web.dev/ttfb/ by adding a trailing slash, then Lighthouse will run against the trailing-slash version. PageSpeed Insights will warn you of this, but your users may be using the pre-redirect URL and so won't get this normalisation:

    PageSpeed Insights example showing it was run with a redirect

    Additionally, many of your users will not come directly to the correct URL either. They may be going through link shorteners (e.g. t.co for Twitter), or via an ad that runs through several redirects before actually requesting the page. And yes, redirects count towards TTFB, and therefore also towards FCP and LCP.

    Even without these, Lighthouse does not currently show TTFB, but only the server response time (it strips out any DNS and redirect time to show just that portion).
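    You can see this breakdown for your own page with the Navigation Timing API in the browser console. A rough sketch (the serverTime figure only approximates what the Lighthouse audit isolates):

    ```ts
    // Sketch: split TTFB into its parts with the Navigation Timing API.
    const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

    const redirectTime = nav.redirectEnd - nav.redirectStart;    // time spent in redirects
    const dnsTime = nav.domainLookupEnd - nav.domainLookupStart; // DNS resolution
    const connectTime = nav.connectEnd - nav.connectStart;       // TCP/TLS handshake
    const serverTime = nav.responseStart - nav.requestStart;     // request sent -> first byte

    // Full TTFB as Core Web Vitals counts it: navigation start -> first byte.
    // For navigation entries, startTime is 0, so responseStart is the total.
    const fullTTFB = nav.responseStart;

    console.table({ redirectTime, dnsTime, connectTime, serverTime, fullTTFB });
    ```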

    "Users having a slower Internet connection" isn't an explanation either, right? I could understand the difference until the last part of the page is there, but how can the very first byte be affected by this in such a drastic way?

    This absolutely can be a reason, even without the redirects discussed above. PageSpeed Insights runs from a server that is permanently connected to the internet, whereas your users may not be. It's true that Lighthouse attempts to simulate a slowed-down connection, but that is an estimate. Real users may be connecting from far-away countries, or over patchy mobile networks out in the countryside, where there is a significant delay between clicking a link or typing a URL and the page even being requested.

    Repeatedly running a PSI test may also give different results than real users see, depending on how your infrastructure is set up. Is the page cached at a CDN edge node, so that the PSI run repeatedly hits that cached version and is very quick, whereas a user connecting to another edge node misses the cache and has to wait while the CDN goes all the way back to the origin? Is the server always running, or is there a boot-up time when nothing has been requested for a while? If so, infrequently requested resources (whether for that CDN edge node, or just for a less-trafficked page) may have different server response times in repeat tests than for other users.
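    One rough way to spot-check this is to compare back-to-back requests and look at the caching headers. A sketch for Node 18+ (the header names are CDN-specific assumptions, e.g. x-cache for CloudFront/Varnish, cf-cache-status for Cloudflare, age for generic HTTP caches):

    ```ts
    // Sketch: compare two requests and look for CDN cache-status headers.
    async function checkCacheStatus(url: string): Promise<void> {
      const start = performance.now();
      const res = await fetch(url, { redirect: 'follow' });
      const elapsed = performance.now() - start;

      console.log(url, `${elapsed.toFixed(0)} ms`);
      // Header names vary by CDN -- these three are common examples.
      for (const header of ['x-cache', 'cf-cache-status', 'age']) {
        const value = res.headers.get(header);
        if (value !== null) console.log(`  ${header}: ${value}`);
      }
    }

    // If the second run is much faster and reports a cache hit, repeat lab
    // tests will look better than a first-time visitor's experience.
    await checkCacheStatus('https://example.com/');
    await checkCacheStatus('https://example.com/');
    ```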

    Finally, your screenshot shows that it is not the page TTFB being reported, but the origin TTFB, as no page-level data was available:

    Zoomed-in view of the "This URL"/"Origin" section of the above screenshot

    Perhaps you have some pages that take longer to generate server-side because they do a lot of processing (for the reasons given above, or other reasons), but you're testing a quick page here.
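    To check what field data actually exists for the exact URL versus the whole origin, you can query the Chrome UX Report API directly. A minimal sketch (CRUX_API_KEY is a placeholder, and the metric key experimental_time_to_first_byte is an assumption to verify against the current CrUX API docs):

    ```ts
    // Sketch: query the Chrome UX Report API at page level and origin level.
    const CRUX_API_KEY = 'YOUR_API_KEY'; // placeholder
    const endpoint = `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`;

    async function queryCrux(body: Record<string, string>): Promise<void> {
      const res = await fetch(endpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(body),
      });
      if (!res.ok) {
        // A 404 typically means CrUX has no field data at that level --
        // exactly the case where PSI falls back from "This URL" to "Origin".
        console.log(JSON.stringify(body), `-> no data (${res.status})`);
        return;
      }
      const data = await res.json();
      // Metric key assumed here; check the CrUX API reference for the
      // current name of the TTFB metric.
      const ttfb = data.record?.metrics?.experimental_time_to_first_byte;
      console.log(JSON.stringify(body), '-> p75 TTFB:', ttfb?.percentiles?.p75, 'ms');
    }

    // Page-level data may be missing while origin-level data exists:
    await queryCrux({ url: 'https://example.com/some-page' });
    await queryCrux({ origin: 'https://example.com' });
    ```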