The Updated Performance Metrics of Lighthouse 6.0

TOAST UI
Jun 18, 2020

In the last weekly pick, I mentioned that Lighthouse would be updated with new and improved features.

This May, Lighthouse 6.0 was officially released, and it is planned to ship with Chrome 84. If you want to try it out right away, you can download the Chrome Canary build.

Brief Introduction of the Lighthouse

Lighthouse is a tool designed to audit and improve web performance, and it can be found in Chrome DevTools. It offers guidance and performance scores so that the pages we build load faster and respond better to user interaction.

Performance auditing tools like this are important, to say the least. If you don’t see what the big deal is, try kicking off a performance optimization effort for your webpage armed with nothing but console.log. You’ll see what I mean soon enough, and you will quickly be grateful for the many features and the timely information Lighthouse provides.

Furthermore, Lighthouse is not limited to Chrome DevTools; its Node CLI can also be used in CI for automated performance testing.

npm install -g lighthouse
lighthouse https://www.example.com --view
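
The Lighthouse Node module can also be run programmatically, which is handy for CI scripts. The following is a minimal TypeScript sketch, assuming the lighthouse and chrome-launcher npm packages are installed; the audited URL and the 0.9 score budget are placeholders, not values from this article.

// ci-lighthouse.ts: a rough sketch of running Lighthouse programmatically in a CI job.
// Assumes the `lighthouse` and `chrome-launcher` packages are installed;
// the audited URL and the 0.9 score budget are placeholders.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function main(): Promise<void> {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse('https://www.example.com', {
      port: chrome.port,
      onlyCategories: ['performance'],
      output: 'json',
    });
    // Lighthouse reports category scores in the 0..1 range.
    const score = result?.lhr.categories.performance.score ?? 0;
    console.log(`Performance score: ${Math.round(score * 100)}`);
    if (score < 0.9) {
      process.exitCode = 1; // fail the CI job when the page misses the budget
    }
  } finally {
    await chrome.kill();
  }
}

main();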

It can also be used as a browser extension, and extensions are currently available for Chrome and Firefox. While the internal implementations differ from browser to browser, the mere fact that Lighthouse is available for Firefox is amazing.

What’s New with the New Version

If you take a look at the 6.0 release notes, you can see that a lot has changed. The following is the list of major updates according to the post on the official blog.

  • New Metrics
  • Performance Score Update
  • Lighthouse CI
  • Renamed Chrome DevTools Panel
  • Mobile Emulation
  • Browser Extension
  • Budgets
  • Source Location Links
  • Source Map Collection for Unused JavaScript Module Detection

Quietly but steadily, the Chrome DevTools Audits panel has changed its name to Lighthouse. It has been in the stable channel since Chrome 81, which just goes to show the importance of Lighthouse and its already high reputation.

This article will focus solely on the new performance metrics.

Enter, the New Performance Metrics

As web technology develops and changes, what is considered important for a webpage inevitably changes as well.

A performance metric, the standard by which performance is measured, must represent a webpage’s performance in a meaningful way: how fast or slow the page as a whole loads, and how fast or slow its interactions feel, are color-coded appropriately and presented as visuals. Lighthouse has been responsible for developing a variety of such metrics and offering guidance for developers.

No More FMP

Speaking of metrics, the fact that FMP (First Meaningful Paint) has been deprecated seems like cause for celebration. The metric was too abstract, and it was incredibly difficult to measure in a scientific or self-explanatory way, which made FMP hard to standardize. A few years back, I gave a talk (in Korean) on improving a page’s FMP. However, what counts as a “meaningful” render differs among developers, project managers, clients, and services. It is also possible for users to feel that a page is slow even though its FMP value looks fast. Such dissonance is why FMP could not remain one of the metrics, and why a new metric was needed.

To replace FMP, Lighthouse introduced three new metrics, as hinted at in the alpha version: LCP, CLS, and TBT.

LCP (Largest Contentful Paint)

Just by looking at the name, LCP, we can tell that the standard is crystal clear: it measures how long the largest piece of content on the screen takes to render and uses that as the metric. Because it is tied to the loading speed of the largest element on the screen, it is a much more viable metric than FMP.

This article explains which elements are considered for LCP.

  • img elements
  • image elements within an svg element
  • video elements
  • Any element with a background image
  • Block-level elements containing text

The largest contentful element can change dynamically while the page loads. The following set of images shows how the LCP candidate switches from one element to another.

(Source: https://web.dev/lcp/#examples)

With LCP, any page whose LCP value is lower than 2.5 seconds is considered fast.

(Source: https://web.dev/vitals)

Since this metric is simple, it can be standardized. The W3C Web Performance Working Group is currently working on standardizing LCP and on the spec for the Largest Contentful Paint API. With the API, we can expect to be able to record performance improvements conveniently.
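
In browsers that already ship the API, LCP candidates can be observed with a PerformanceObserver. Here is a minimal sketch; the last entry reported before the user interacts with the page is the one that counts as the page’s LCP.

// Sketch: observing LCP candidates via the Largest Contentful Paint API.
// The last 'largest-contentful-paint' entry reported before user input is the page's LCP.
const lcpObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // startTime is when the current largest element was rendered, in milliseconds.
    console.log('LCP candidate:', entry.startTime, entry);
  }
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });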

CLS (Cumulative Layout Shift)

CLS is a measure of how much content movement occurs on the screen. This metric is user-centric, and it exists because too much movement on the screen can irritate users.

When I first read about CLS, I considered it to be a metric that is not closely related to performance. However, under certain circumstances, a poor CLS can be a massive problem. Let’s consider an example. The user tries to press a “Cancel Order” button just as the page finishes loading. However, at that exact moment an ad finishes loading as well and gets in the way. The “Cancel Order” button the user intended to press is either covered or pushed down, and, if the user is unlucky, they end up clicking a “Confirm Purchase” button instead.

It would also be chaotic if the content suddenly shifted down while you were reading it, like in the visual below.

(Source: https://web.dev/cls/)

The following is a webpage that is stable (not much shifting around) in terms of CLS.

(Source: https://web.dev/cls/)

If you are interested in learning how CLS is calculated, refer to this YouTube link.
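
For a rough feel for the number: each layout shift gets a score (the product of its impact fraction and distance fraction), and shifts that immediately follow user input are excluded. A minimal TypeScript sketch that accumulates these entries in the browser might look like this; it is an illustration of the idea, not Lighthouse’s own implementation.

// Sketch: summing layout-shift entries into a CLS-like score via the Layout Instability API.
// The LayoutShift entry type is not in the default TypeScript DOM typings, hence the cast.
let cumulativeLayoutShift = 0;
const clsObserver = new PerformanceObserver((entryList) => {
  const shifts = entryList.getEntries() as Array<PerformanceEntry & { value: number; hadRecentInput: boolean }>;
  for (const entry of shifts) {
    // Shifts that happen right after user input are expected and not counted.
    if (!entry.hadRecentInput) {
      cumulativeLayoutShift += entry.value;
      console.log('CLS so far:', cumulativeLayoutShift);
    }
  }
});
clsObserver.observe({ type: 'layout-shift', buffered: true });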

TBT (Total Blocking Time)

TBT represents how responsive the page is while it is loading. This is important because a page that stays responsive while loading and one that does not feel vastly different in terms of performance from the user’s perspective. This metric measures the total time the page remains unresponsive because the main thread is blocked.

The longer such lengthy tasks run, the lower the responsiveness drops, so Lighthouse records every main-thread task that takes over 50ms. Anything beyond 50ms gives users fidgeting with their mice and keyboards the impression that the page is slow.

(Source: https://web.dev/tbt/)

In terms of TBT, the guideline for a responsive and pleasant user experience is under 300ms. Lighthouse recommends that the blocking time of tasks during the loading process (the portion of each task beyond 50ms) sum to below 300ms.
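
Lighthouse computes TBT in the lab, between First Contentful Paint and Time to Interactive, but the idea can be sketched with the Long Tasks API: only the portion of each task beyond 50ms counts as blocking time. The following is a rough approximation, assuming the observer is registered early in the page; it is not how Lighthouse itself measures the metric.

// Sketch: approximating total blocking time with the Long Tasks API.
// Only the portion of each main-thread task beyond 50ms counts as "blocking".
// Register this observer early (e.g. in the document head) so no long task is missed.
let totalBlockingTime = 0;
const tbtObserver = new PerformanceObserver((entryList) => {
  for (const task of entryList.getEntries()) {
    // Long task entries are only reported for tasks of 50ms or more.
    totalBlockingTime += Math.max(0, task.duration - 50);
  }
  console.log('Approximate blocking time so far:', totalBlockingTime, 'ms');
});
tbtObserver.observe({ entryTypes: ['longtask'] });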

The most effective performance optimization techniques for lowering TBT are the following.

  • Reducing the Resource Load
  • Reducing JavaScript Execution Time
  • Reducing Render-Blocking Resources

Real-Life Optimization Case Study

Recently, Dooray! went through an optimization process to make DOMContentLoaded faster. Many techniques were used to improve loading speed, and they also led to an improved TBT.

One of them was deferring JS modules that do not need to be initialized or loaded along with the main content while the page loads.
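
As an illustration of that idea (the module and element names here are hypothetical, not from the Dooray! codebase), a non-critical widget can be split out and loaded with a dynamic import only when it is actually needed.

// Sketch: deferring a non-critical module instead of initializing it with the main content.
// './chart-widget' and the element IDs are hypothetical and only serve as an example.
const chartButton = document.querySelector('#show-chart');
chartButton?.addEventListener('click', async () => {
  const { renderChart } = await import('./chart-widget');
  const container = document.querySelector('#chart-container');
  if (container) {
    renderChart(container);
  }
});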

Conclusion

Performance metrics will continue to change in order to offer better measurements and appropriate guidelines. It is my hope that Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and Total Blocking Time (TBT) will help us develop better, faster, and more stable webpages.
