Google has launched a new way of measuring Cumulative Layout Shift (CLS), but somehow I am not able to understand the new definition from the articles on web.dev, such as https://web.dev/evolving-cls/.
Could anyone help me understand this in simple words, or suggest an article or video?
The simple answer is that CLS used to accumulate over the entire life of the page, but it is now batched into “windows”, with the worst window reported.
Let’s take the example of an infinite-scroll page (i.e. one that loads more and more content as you scroll down the page, like a Twitter or Facebook feed), or a Single Page App (SPA).
As you stayed on the one page (even if it looked like different pages, in the case of an SPA), your CLS would continue to grow and grow if you had any shift-inducing elements. There was no limit to how large it could grow.
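To make that concrete, here’s a minimal TypeScript sketch of how the old metric behaved: every layout shift not caused by recent user input simply added to one running total for the life of the page, with no upper bound. The `layout-shift` entry type and its `value` and `hadRecentInput` fields come from the Layout Instability API; the interface declaration is only there to keep the snippet self-contained.

```typescript
// Sketch of the *old* CLS behaviour: a single running total for the
// whole life of the page. The layout-shift fields aren't in the
// built-in DOM typings, so we declare a minimal interface ourselves.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;           // size of this individual shift
  hadRecentInput: boolean; // true if a user interaction just happened
}

let totalCls = 0;

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as LayoutShiftEntry[]) {
    // Shifts shortly after user input are excluded from CLS.
    if (!entry.hadRecentInput) {
      totalCls += entry.value; // keeps growing on a long-lived page
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```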
Let’s say that on an infinite-scroll page every new element loads a picture without width and height attributes in the HTML (attributes which would let the browser lay out the required space before the image is downloaded). Each new block inserted will therefore add a small amount of CLS to the page. If you stay on that page long enough you will eventually exceed the limit for “good” CLS and the page will be seen as poorly performing, even though each individual shift is relatively small.
Of course you can prevent any shifts by ensuring the content is loaded before it scrolls into view, or by reserving the space (e.g. adding width and height attributes to the images on each injected piece of content), but getting 0 CLS is tough (not impossible, but tough!), and the longer-lived the page, the more likely some extra CLS will be introduced and the more likely those shifts will total above the limit.
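For the infinite-scroll case, reserving the space can be as simple as setting the image dimensions when you inject each new block. Here’s a small TypeScript sketch; the `feed` container id and the dimensions are made up for illustration:

```typescript
// Sketch: inject a new feed item without causing a layout shift by
// giving the image explicit dimensions up front.
function appendFeedImage(src: string, width: number, height: number): void {
  const img = document.createElement('img');
  // Setting width/height writes the HTML attributes, so the browser can
  // reserve the slot before the image bytes arrive and nothing shifts
  // when it loads.
  img.width = width;
  img.height = height;
  img.src = src;
  document.getElementById('feed')?.appendChild(img);
}

appendFeedImage('/images/next-post.jpg', 640, 360);
```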
Similarly, if you have an SPA with 5 virtual pages (e.g. a checkout app), then each page transition could introduce some CLS, whereas if you’d coded it as 5 separate server-rendered pages you’d get the CLS budget reset on each transition. It doesn’t seem fair to penalise you just because of your choice of technology, does it, if the experience for the user is the same in both cases?
Now it should be noted that user interactions allow a small time period during which layout shifts are ignored. For example, if you click a “Show more details” button and it expands some hidden content, then of course the user expects that shift, so it doesn’t count. So an SPA can be coded to have zero CLS, but, as I say, it’s just more likely to be an issue for these long-lived pages.
So what the new CLS definition says is: rather than taking the whole running total, use the worst chunk of CLS within a “window” of time. That chunk might still be the result of several layout shifts (so it’s still “cumulative”, rather than just measuring the single worst shift), but it caps how much a long-lived page can accumulate.
So if on page load you get shifts of 0.05, 0.025 and 0.024, then a pause, and then much later another shift of 0.04, you might have two CLS windows: one of 0.099 (0.05 + 0.025 + 0.024) and another of 0.04. The CLS will be reported as the worst of these (0.099), whereas under the old definition it would have been reported as the total (0.139). This means that under the new definition you just stay under the 0.1 limit for “good”, whereas under the old one you’d be into the “needs improvement” category.
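If you want to see the windowing for yourself, you can compute it from `layout-shift` entries. The sketch below groups shifts into session windows and keeps the worst one. I’m assuming the window rules described in the web.dev article (a window closes after a gap of more than 1 second between shifts, or once it spans 5 seconds); in practice you’d normally let a library such as web-vitals do this for you, so this is only to illustrate the idea:

```typescript
// Sketch of the *new* CLS: group shifts into session windows and report
// the largest window. Assumed rules: a window closes after a gap of more
// than 1s between shifts, or once it spans more than 5s.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let worstWindow = 0;   // this is what gets reported as CLS
let sessionValue = 0;  // running total for the current window
let sessionEntries: LayoutShiftEntry[] = [];

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries() as LayoutShiftEntry[]) {
    if (entry.hadRecentInput) continue; // user-initiated shifts don't count

    const first = sessionEntries[0];
    const last = sessionEntries[sessionEntries.length - 1];
    const sameWindow =
      sessionEntries.length > 0 &&
      entry.startTime - last.startTime < 1000 &&  // gap under 1s
      entry.startTime - first.startTime < 5000;   // window under 5s

    if (sameWindow) {
      sessionValue += entry.value;
      sessionEntries.push(entry);
    } else {
      sessionValue = entry.value; // start a new window
      sessionEntries = [entry];
    }

    worstWindow = Math.max(worstWindow, sessionValue);
  }
}).observe({ type: 'layout-shift', buffered: true });
```

With the numbers from the example above, the three early shifts land in one window (0.099), the much later 0.04 starts a new window, and `worstWindow` ends up as 0.099.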
Every page should in theory have exactly the same CLS as before, or better, and no one should be worse off for this change. So it’s seen as a good improvement for long-lived pages, with no downsides.
However, it does mean you might be hiding some CLS you weren’t aware of. Continuing the above example, if you fix the page-load shifts, bring that 0.099 down to 0 and think you’re all good, you might be surprised to see your CLS now reported as 0.04 (the next biggest window). It can leave you feeling like you’re chasing your own tail! But that was much the same before, if you were finding your worst shifting element, optimising it, then finding the next one and continuing.
Ultimately I think it’s a good improvement: it makes the CLS limits easier to meet and rewards progress. It was a necessary change for long-lived pages.