My company tracks Core Web Vitals as well as some metrics that aren't "core", including Time To First Byte (TTFB), First Contentful Paint (FCP), and Speed Index. While not technically even a Web Vital, Speed Index is measured by PageSpeed Insights and Lighthouse, so I'd naively assumed it would be included in the web-vitals library. It is not.
In fact, we use a customized version of a WebPageTest library that was originally created in 2014. It calculates a score that purports to indicate when the page appears usable, using a formula based on the images that load in the visible part of the page, each weighted by how much relative space it occupies and when it appears.
Consider a page with two images that together occupy 100,000 pixels. If one is small (say, it accounts for only 10% of the overall image coverage), it won't affect the speed index much, even if it's slow to load. If the larger image is slow, however, the speed index will increase in a way that's a bit hard to follow but proportional.
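To make the weighting concrete, here's a minimal sketch of an area-weighted score along these lines. The interface and function names are my own invention, not the actual WebPageTest API: I'm assuming each visible image reports the viewport area it covers and the time at which it finished rendering.

```typescript
// Hypothetical sketch of an area-and-time weighted speed index.
// These names are illustrative; they are not the WebPageTest library's API.
interface PaintedImage {
  area: number; // visible pixels the image occupies in the viewport
  time: number; // ms from navigation start until the image rendered
}

function speedIndex(images: PaintedImage[]): number {
  const totalArea = images.reduce((sum, img) => sum + img.area, 0);
  // A page with no images contributes nothing to the score.
  if (totalArea === 0) return 0;
  // Each image contributes its render time, weighted by the share of
  // the total image area it covers.
  const weighted = images.reduce((sum, img) => sum + img.area * img.time, 0);
  return weighted / totalArea;
}

// Two images totalling 100,000 px²: a slow *small* image barely moves
// the score, while a slow *large* image dominates it.
const smallSlow = speedIndex([
  { area: 90_000, time: 500 },
  { area: 10_000, time: 3_000 },
]); // (90000*500 + 10000*3000) / 100000 = 750

const largeSlow = speedIndex([
  { area: 90_000, time: 3_000 },
  { area: 10_000, time: 500 },
]); // (90000*3000 + 10000*500) / 100000 = 2750
```

Note that the `totalArea === 0` branch is what makes the failure mode described below possible: an image-free page scores well no matter how slowly its text arrives.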
It would be nice to have a library that better approximated Lighthouse or PageSpeed Insights scores, because the WebPageTest library feels pretty arbitrary in comparison. For example, I tested a page with no images but lots of text loaded from the server. When I tweaked the server to delay its response by a few seconds, the speed-index metric indicated all was well (great, even). In other words, a page can show nothing for a painfully long time and still get a decent score from this tool.
It might be that it's simply misnamed and in fact does one particular thing reasonably well, if that thing is exclusively concerned with when images become visible. In any case, I’m going to investigate whether we can stop using it or else try to find a substitute that comes up with a more holistic speed index.