
Great Lighthouse scores, but users still complain your site is slow? Here’s the shocking truth: Google’s research found that 50% of websites with perfect Lighthouse scores still fail Core Web Vitals when measured with real user data. Half!

Synthetic Testing and Real User Monitoring are the two most important tools in your performance toolbox. But they do different things and are useful at different times. Many developers master only one of them and miss critical performance problems, like trying to drive a screw with a hammer.

Let’s look at these tools, what they measure, and when to use them.

Synthetic Testing

Synthetic Testing measures the performance of a website under controlled conditions. Examples include Lighthouse audits from Chrome DevTools, PageSpeed Insights, or Request Metrics Lab tests. The test simulates location, latency, bandwidth, browser, and device to approximate a visitor’s experience.

For synthetic tests to be accurate and valuable, you need to know things about your likely visitors: where they are, what network they’re on, and what device they’re using. Then the test must accurately simulate these characteristics. Both are difficult.
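
To make the simulation concrete, here's a minimal sketch of running a synthetic test programmatically with Lighthouse's Node API. It assumes the lighthouse and chrome-launcher npm packages are installed, the URL is a placeholder, and every throttling number in it is a guess you would tune to your own audience.

```typescript
// Minimal sketch: run a Lighthouse audit under explicitly simulated conditions.
// Assumes the "lighthouse" and "chrome-launcher" npm packages; the URL and all
// throttling numbers below are placeholders, not recommendations.
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function runSyntheticTest(url: string) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });

  const result = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ["performance"],
    formFactor: "mobile",         // pretend the visitor is on a phone
    throttlingMethod: "simulate", // simulate a slower network and CPU
    throttling: {
      rttMs: 150,                 // guessed round-trip latency
      throughputKbps: 1600,       // guessed bandwidth
      cpuSlowdownMultiplier: 4,   // guessed device slowdown
    },
  });

  console.log("Performance score:", result?.lhr.categories.performance.score);
  await chrome.kill();
}

runSyntheticTest("https://www.example.com");
```

If those guesses don't match your real visitors, the score won't either.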

The internet is big and diverse, and we developers rarely know enough about our users. We make guesses, but because we usually work on fast networks with new laptops, we overestimate our users' capabilities. It's fast on my machine.

Plus, you likely have more than one type of user. Some visit from work laptops. Others try to log in from phones on trains, or from tablets on flaky coffee shop Wi-Fi. Each user has a different performance perspective and would need a different test simulation.

Lighthouse tries to simulate an “85th percentile user,” but your users aren’t Google’s generic average. A developer-focused B2B tool has completely different users than a consumer social app. Lighthouse can’t know the difference, so its baked-in assumptions are often wrong for your specific audience.

The biggest benefit of synthetic testing is that you can run a test right now, whether or not you have any users yet. The results will probably surface your biggest performance problems.

The test will be flawed, and that’s okay—it gives you an idea of performance. Synthetic testing never tells you how fast your website really is—only how fast it might be under ideal conditions.

For more on synthetic testing limitations, see our detailed guide on The Limitations of Lighthouse.

Real User Monitoring

Real User Monitoring is just that: real. RUM doesn’t guess or simulate. It records the actual performance experienced by the people who visit your website.

RUM works by adding JavaScript to your website that runs alongside your page content. This script uses built-in browser APIs to collect performance data as users interact with your site—page load times, interaction delays, animation performance, and errors.

When someone visits your page, the RUM script measures their real experience and sends that data to a reporting service. This gives you performance data that reflects real network conditions, real devices, and real user behavior patterns.
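
To make that concrete, here is a stripped-down sketch of the idea (not Request Metrics' actual script). It uses the standard PerformanceObserver and sendBeacon browser APIs; the /collect endpoint is a placeholder for whatever reporting service you use.

```typescript
// Stripped-down sketch of what a RUM script does: observe real timings in the
// visitor's browser and beacon them to a reporting endpoint.
// The "/collect" URL is a placeholder, not a real Request Metrics endpoint.
function report(metric: string, value: number): void {
  const payload = JSON.stringify({ metric, value, page: location.pathname });
  // sendBeacon survives page unloads better than fetch or XHR
  navigator.sendBeacon("/collect", payload);
}

// Largest Contentful Paint: how long until the main content rendered
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  if (last) report("largest-contentful-paint", last.startTime);
}).observe({ type: "largest-contentful-paint", buffered: true });

// First interaction delay: how long the first click or keypress waited
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    report("first-input-delay", entry.processingStart - entry.startTime);
  }
}).observe({ type: "first-input", buffered: true });

// Errors that real users actually hit
window.addEventListener("error", () => report("js-error", 1));
```

A production RUM library does much more (sampling, batching, attribution), but the core idea really is this simple.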

Real User Monitoring is more accurate than synthetic testing, but there’s also more noise and delay.

RUM data includes all users, even that person browsing your website from Mongolia on questionable internet. You’ll need statistics to understand what it means—medians, percentiles, and distributions. Used correctly, RUM tells you how your fastest users, typical users, and worst users experience your website.
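
For example, here is a toy percentile helper over a handful of made-up LCP samples. The numbers are invented for illustration; the point is that the median, p75, and p95 tell very different stories about the same traffic.

```typescript
// Toy example: summarize noisy RUM samples with percentiles instead of averages.
// The sample values are made-up LCP timings in milliseconds.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, index)];
}

const lcpSamples = [900, 1100, 1200, 1400, 1500, 1800, 2300, 3200, 4800, 12000];

console.log("median (p50):", percentile(lcpSamples, 50)); // your typical user
console.log("p75:", percentile(lcpSamples, 75));          // the percentile Core Web Vitals uses
console.log("p95:", percentile(lcpSamples, 95));          // your slowest users
```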

The biggest limitation is delay. RUM can’t tell you how fast your site will be until users visit it. You have to release changes and measure impact to see if your site sped up—or not. Synthetic testing can guess at performance early, helping find obvious problems, but to prove your site is fast, you need RUM.

Why Chrome User Experience Report (CrUX) Isn’t Enough

The Chrome User Experience Report provides real user data, which sounds great—but it’s like the lite beer of Real User Monitoring. Sure, it’s better than nothing if you have a lot of it, but it lacks the substance you need to make real decisions.

CrUX does show data from your specific website, but only for your busiest pages and only as total aggregate performance. It’s like getting a report that says “your restaurant had mixed reviews this month” without knowing which dishes, which servers, or which times of day caused problems. You know something might be off, but you can’t do anything about it.

CrUX also aggregates a rolling 28-day window, so a regression can take weeks to fully show up in the numbers. In web time, that’s an eternity.

CrUX data is public, making it excellent for benchmarking against competitors. But that same public nature means it’s not detailed enough for optimization decisions about your specific site.
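
If you want to see exactly what that public data looks like, you can query the CrUX API directly. The sketch below assumes you supply your own Google API key, and the origin is a placeholder. Note what comes back: origin-wide percentiles over a 28-day window, with none of the page-level or session-level detail discussed above.

```typescript
// Sketch: pull the public CrUX aggregate for an origin.
// CRUX_API_KEY and the origin below are placeholders you would replace.
const CRUX_API_KEY = "YOUR_API_KEY";

async function fetchCruxSummary(origin: string) {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, formFactor: "PHONE" }),
    }
  );
  const data = await response.json();
  const lcp = data.record?.metrics?.largest_contentful_paint;
  console.log("p75 LCP (ms):", lcp?.percentiles?.p75);
}

fetchCruxSummary("https://www.example.com");
```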

CrUX gives you the 30,000-foot view. Real User Monitoring gives you the ground-level intelligence you need to improve your site.

Signal vs Noise: The Real Difference

The difference between synthetic testing and Real User Monitoring comes down to signal vs noise. Synthetic tests don’t have much noise: each Lighthouse run is a valid measurement for its particular conditions. Run the test again with the same conditions and you’ll get similar results.

But as Google’s research showed, there’s not much signal in those synthetic results either. That Lighthouse report isn’t how any user experiences your page (unless they’re browsing from your laptop on your network).

Real User Monitoring is the opposite. Each bit of RUM data shows how your website really performed for a visitor. But those visitors can be wildly different. Some have awesome experiences. Others think they’re still on dial-up.

The trick is deciding which users you care about. If you’re building a site for corporate users in the United States, performance for mobile users in remote areas might not matter. RUM tools like Request Metrics help you filter the noise and aggregate the data for a clearer picture of your target users.

When Only RUM Will Do: Real-World Scenarios

Here are performance problems that synthetic testing will never catch:

Black Friday traffic spikes: Synthetic testing can’t simulate real load on your servers, payment processors, and CDN. RUM captures what happens when thousands of people try to buy your stuff simultaneously.

Dynamic content complexity: Your personalization engine might be fast for new users but slow for power users with complex histories. Maybe your A/B test variant tanks performance. Synthetic testing would never catch these issues.

Geographic infrastructure differences: Users in different regions experience vastly different network conditions. RUM shows which markets have performance problems you’d never see in controlled testing.

Third-party service reality: That chat widget, analytics script, or payment processor might work fine in testing but cause real user delays during outages or peak usage.

Device-specific issues: RUM reveals problems like “our site is slow on iPhone 12 specifically” or “users with older Android phones can’t complete checkout.”
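
Catching segment-specific problems like these depends on tagging every RUM measurement with context you can slice on later. Here is a rough sketch of the kind of metadata a beacon might carry; the /collect endpoint is a placeholder, and deviceMemory and connection are Chromium-only properties that may be undefined in other browsers.

```typescript
// Sketch: attach device and network context to each RUM beacon so the data can
// be sliced later ("slow only on older Android", "slow only on 3G", ...).
// "/collect" is a placeholder endpoint.
function contextTags() {
  return {
    page: location.pathname,
    userAgent: navigator.userAgent,
    deviceMemoryGb: (navigator as any).deviceMemory,              // Chromium-only
    effectiveConnection: (navigator as any).connection?.effectiveType, // Chromium-only
    viewportWidth: window.innerWidth,
  };
}

function reportWithContext(metric: string, value: number): void {
  navigator.sendBeacon("/collect", JSON.stringify({ metric, value, ...contextTags() }));
}
```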

The Complete Performance Strategy

Both synthetic testing and Real User Monitoring are valuable for developers building fast websites. Use them strategically:

Use synthetic testing to test changes before release. It catches obvious mistakes and gives you consistent testing conditions during development. Think of it as your safety net.

Use Real User Monitoring tools like Request Metrics to see if changes really sped things up. You don’t know how fast your website is until your visitors tell you.

The winning approach:

  1. Develop with synthetic testing to prevent shipping obvious problems
  2. Deploy and measure with RUM to see real-world impact
  3. Investigate with synthetic testing when RUM spots problems
  4. Validate fixes with RUM to ensure improvements help real users

For a deeper dive into why RUM is essential for understanding your site’s true performance, read our comprehensive guide: Why You Need Real User Monitoring to Really Understand Your Web Performance.

The Bottom Line

Stop guessing about your website’s performance. Synthetic testing gives you development confidence. Real User Monitoring gives you production truth.

Ready to see how your website actually performs for real users? Start monitoring with Request Metrics and discover what synthetic testing has been missing.