
You’re reading an old performance article, and it keeps talking about “First Meaningful Paint.” You search for how to improve it, but every tool gives you different advice. Some don’t mention it at all. What’s going on?

Here’s the short answer: First Meaningful Paint is dead. Google deprecated it in Lighthouse 6.0 back in 2020 and removed it completely in Lighthouse 13. If you’re still trying to optimize for FMP, you’re chasing a ghost.

What Lighthouse Used to Tell You

The old FMP audit message said:

First Meaningful Paint measures when the primary content of a page is visible.

FMP tried to answer a simple question: when does the page look “done enough” that users can start reading or interacting? It watched for the moment when the biggest visual change happened above the fold, assuming that’s when the “meaningful” content appeared.

In theory, this was brilliant. In practice, it was a mess.

Why Google Killed FMP

First Meaningful Paint had three fatal flaws that made it unreliable for real-world performance measurement.

1. Wildly Inconsistent Results

FMP was “bimodal”: its results clustered around two different values, so you’d run the same test twice and get completely different numbers. A page might score 1.2 seconds on one run and 2.8 seconds on the next. This inconsistency made it nearly impossible to know whether your optimizations were actually working or you just got lucky.

The metric was overly sensitive to tiny changes in page load order. Shift one CSS file around, and your FMP could jump by a full second. That’s not useful data; that’s noise.

2. Tied to Chrome’s Internals

FMP relied on specific rendering events inside Blink, Chrome’s rendering engine. These events were implementation details that other browsers couldn’t replicate. Firefox and Safari had no way to report FMP because the underlying mechanism simply didn’t exist outside of Chromium.

A metric that only works in one browser family can’t become a web standard. And Google wanted metrics that the entire web could use, not just Chrome users.

3. “Meaningful” Is Subjective

What counts as “meaningful” varies wildly between websites. For a news site, it’s the headline. For an e-commerce site, it’s the product image. For a dashboard, it might be a chart or data table. FMP’s algorithm tried to guess what mattered on your page, and it often guessed wrong.

The metric would sometimes identify a loading spinner or navigation bar as the “meaningful” content, completely missing the actual content users cared about.

What Replaced FMP

Google replaced First Meaningful Paint with Largest Contentful Paint (LCP), which became one of the three official Core Web Vitals.

LCP takes a simpler, more reliable approach: instead of guessing what’s “meaningful,” it measures when the largest visible element renders. This is usually your hero image, main headline, or featured content block.
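In the browser, you can watch LCP directly through the PerformanceObserver API: the browser emits a new largest-contentful-paint entry each time a larger element renders, and the last entry reported is the page’s LCP. A minimal sketch (the helper name `latestLcp` is ours, not part of any API):

```javascript
// Pick the final LCP candidate: the last largest-contentful-paint
// entry in the buffer is the current LCP for the page.
function latestLcp(entries) {
  const last = entries[entries.length - 1];
  return last ? last.startTime : undefined;
}

// Guarded so this sketch only observes where the entry type exists.
if (
  typeof PerformanceObserver !== "undefined" &&
  PerformanceObserver.supportedEntryTypes.includes("largest-contentful-paint")
) {
  new PerformanceObserver((list) => {
    console.log("LCP candidate (ms):", latestLcp(list.getEntries()));
  }).observe({ type: "largest-contentful-paint", buffered: true });
}
```

Note that the browser stops reporting new candidates once the user interacts with the page, so the value you log last is the one that counts.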

Why LCP is better:

| Problem with FMP | How LCP Fixes It |
| --- | --- |
| Inconsistent results | Deterministic measurement based on element size |
| Chrome-only implementation | Standardized API that works across browsers |
| Subjective “meaningful” definition | Objective “largest element” measurement |
| Not useful for real optimization | Clear target you can actually improve |

LCP isn’t perfect. Sometimes the “largest” element isn’t the most important one. But it’s consistent, measurable, and actually tells you something actionable about your page performance.

What You Should Do Now

If you’re still seeing FMP mentioned anywhere, here’s how to handle it:

Update Your Monitoring Tools

Any performance monitoring that still reports FMP is running outdated software. Modern tools like Request Metrics, PageSpeed Insights, and current Lighthouse versions all use LCP instead. If your tooling still shows FMP, it’s time to upgrade.

Ignore Old Articles

Performance articles from before 2020 will mention FMP prominently. The optimization advice in those articles is often still valid (faster servers, smaller images, fewer blocking resources), but the metric they’re targeting no longer exists. Focus on LCP instead.

Focus on the Right Metrics

  1. First Contentful Paint (FCP): When the user sees anything render. This tells them their request was received.

  2. Largest Contentful Paint (LCP): When the main content is visible. This is the replacement for FMP.

  3. Time to First Byte (TTFB): How long until your server responds. This affects everything else.

These three metrics together give you a complete picture of loading performance. FCP shows the first response, LCP shows when content is ready, and TTFB tells you if your server is the bottleneck.
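All three can be read from standard browser performance APIs. A rough sketch, assuming a browser context (the `ttfbFrom` helper name is made up for illustration; LCP itself is only exposed through a PerformanceObserver, not `getEntriesByType`):

```javascript
// TTFB: time from the start of the navigation until the first
// response byte, computed from a PerformanceNavigationTiming entry.
function ttfbFrom(navEntry) {
  return navEntry.responseStart - navEntry.startTime;
}

if (typeof performance !== "undefined" && performance.getEntriesByType) {
  // TTFB from the navigation entry.
  const [nav] = performance.getEntriesByType("navigation");
  if (nav) console.log("TTFB (ms):", ttfbFrom(nav));

  // FCP from the paint timeline.
  const fcp = performance
    .getEntriesByType("paint")
    .find((e) => e.name === "first-contentful-paint");
  if (fcp) console.log("FCP (ms):", fcp.startTime);
}
```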

Check Your LCP Score

If you were worried about FMP, check LCP instead. Run your site through PageSpeed Insights, or use the Lighthouse panel in Chrome DevTools to see your current LCP score.

Good LCP thresholds:

| LCP Time | Rating |
| --- | --- |
| Under 2.5s | Good |
| 2.5s to 4s | Needs Improvement |
| Over 4s | Poor |
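
These cut-offs are simple to encode if you want to rate LCP values in your own tooling. A small sketch (the `rateLcp` helper is hypothetical; the thresholds are the ones from the table above):

```javascript
// Classify an LCP value (in milliseconds) using the Core Web Vitals
// thresholds: good up to 2.5s, needs improvement up to 4s, poor beyond.
function rateLcp(ms) {
  if (ms <= 2500) return "good";
  if (ms <= 4000) return "needs-improvement";
  return "poor";
}

console.log(rateLcp(1800)); // "good"
console.log(rateLcp(3200)); // "needs-improvement"
console.log(rateLcp(5100)); // "poor"
```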

If your LCP is slow, check out our guide on Understanding Lighthouse: Largest Contentful Paint for specific fixes.

The Lesson Here

Web performance metrics evolve. What Google recommended five years ago might be deprecated today. FMP seemed like a great idea until real-world data showed it was too unreliable to be useful.

This is why Real User Monitoring matters more than chasing specific metric scores. The metrics will change. The goal stays the same: make your site feel fast for actual users.

LCP will probably get refined or replaced eventually too. The important thing is understanding why these metrics exist (to approximate user experience) rather than obsessing over the specific measurement. A site that loads quickly for real users will score well on whatever metric Google invents next.

Keep Improving

Stop chasing FMP. It’s gone, and it’s not coming back.

Instead, focus on the metrics that actually influence your search rankings and user experience today:

  1. Set up monitoring with Request Metrics to track your Core Web Vitals (LCP, CLS, INP) for real users.

  2. Run Lighthouse audits regularly to catch performance regressions before they affect users.

  3. Check Search Console to see how Google rates your Core Web Vitals based on real Chrome user data.

The web moves fast. Your performance strategy should too.