Field Data vs Lab Data: The 2026 Studio Guide to Reading Performance Like a Grown-Up
Lab data tells you what you changed. Field data tells you whether it mattered. Here is how to use both without fooling yourself.

Two Sources, Different Stories
There are two ways to measure web performance: run a test in controlled conditions (lab data), or collect measurements from real visitors in production (field data). Both are useful. Neither is complete on its own. The mistake most studios make is treating them as interchangeable, or worse, only looking at one.
Lab data comes from tools like Lighthouse, WebPageTest, and Chrome DevTools. You specify the page, the connection speed, and the device profile. The tool loads the page once, measures everything, and gives you a score. The results are reproducible, detailed, and actionable. But they describe a synthetic scenario, not what real visitors experience.
Field data comes from the Chrome User Experience Report (CrUX), real user monitoring (RUM) services, or the web-vitals JavaScript library. It represents actual page loads by actual people on actual devices and networks. It is noisy, aggregated, and harder to act on. But it tells the truth about user experience in a way lab tests cannot.
Why the Numbers Disagree
Run Lighthouse on your portfolio site. It might report an LCP of 1.4 seconds and a perfect 100 performance score. Check the same page in CrUX. It might report an LCP of 3.2 seconds at the 75th percentile.
That is not a bug. It is the gap between controlled conditions and reality:
Network variance. Lighthouse simulates a fixed connection speed (typically "Simulated Throttling" at roughly 1.6 Mbps). Real visitors range from fiber connections at 100+ Mbps to mobile connections on trains and in buildings where effective throughput drops below 1 Mbps.
Device variance. Lighthouse simulates a fixed CPU profile. Real visitors include everything from current flagship phones to three-year-old budget Android devices. A JavaScript task that takes 50 milliseconds on Lighthouse's simulated device might take 250 milliseconds on a real budget phone.
Cache state. Lighthouse tests with an empty cache. Real visitors may have cached your CSS, fonts, and images from a previous visit. First-visit performance and repeat-visit performance are fundamentally different metrics.
User behavior. Lighthouse loads the page and waits. Real visitors scroll, click, resize the viewport, and switch tabs. INP only appears in field data because lab tools do not simulate user interactions (unless you explicitly script them).
Population sampling. CrUX reports the 75th percentile, meaning 75% of page loads experienced the reported value or better. That is a stricter bar than the median. If your site performs well for most visitors but poorly for the slowest 25%, CrUX will reflect the slower end.
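To make the percentile arithmetic concrete, here is a minimal sketch of how a RUM aggregator might summarize LCP samples. The sample values are hypothetical, and this nearest-rank method is simpler than CrUX's actual aggregation:

```javascript
// Compute a percentile from sampled metric values using the
// nearest-rank method (a simplification of real RUM aggregation).
function percentile(samples, p) {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}

// Ten hypothetical LCP samples, in milliseconds:
const lcpSamples = [900, 1100, 1200, 1300, 1500, 1700, 2100, 2600, 3200, 4100];
console.log(percentile(lcpSamples, 50)); // 1500 — the median looks healthy
console.log(percentile(lcpSamples, 75)); // 2600 — the p75 CrUX-style reporting surfaces
```

Note how the same set of samples yields a comfortable median and a much worse p75: the slowest quarter of visits dominates the reported number.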
When to Use Lab Data
Lab data excels at:
Diagnosing specific issues. The Lighthouse performance panel, network waterfall, and timing breakdown let you pinpoint exactly which resource or script is causing a problem. Field data tells you there is a problem. Lab data tells you why.
A/B testing changes. When you optimize an image format, add a preload hint, or defer a script, running Lighthouse before and after shows you the isolated effect of that change. Field data needs up to 28 days to fully reflect a change and is influenced by dozens of variables beyond it.
CI/CD gates. Automated Lighthouse runs in your deployment pipeline catch regressions before they reach production. Set thresholds for each metric and block deploys that exceed them.
Debugging specific user flows. WebPageTest and Chrome DevTools let you script multi-page flows (home page, project page, contact page) and measure each step. This is essential for understanding cumulative performance across a user journey.
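A Lighthouse CI gate like the one described above is typically configured in a `lighthouserc.json`. This is a minimal sketch with illustrative budgets, not recommended values; tune the URLs and thresholds to your own site:

```json
{
  "ci": {
    "collect": {
      "url": ["https://example.com/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "total-blocking-time": ["warn", { "maxNumericValue": 300 }]
      }
    },
    "upload": { "target": "temporary-public-storage" }
  }
}
```

With this in place, a deploy that pushes LCP past 2.5 seconds fails the pipeline instead of reaching production.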
When to Use Field Data
Field data excels at:
Validating improvements. A lab improvement that does not show up in field data was not a real improvement for your users. Field data is the ultimate arbiter.
Identifying pages with problems. CrUX provides per-URL performance data. Scan your entire site to find which pages fail Core Web Vitals thresholds, then investigate those pages in the lab.
Understanding device and network distribution. CrUX segments data by connection type and device category. If your mobile field data is significantly worse than desktop, you know where to focus optimization efforts.
Tracking trends over time. Field data provides a continuous signal. Plot your CrUX metrics over time to see whether performance is improving, degrading, or stable. This catches slow regressions that single lab tests miss.
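For programmatic access to the per-URL and per-origin data described above, the CrUX API exposes the same dataset PageSpeed Insights surfaces. A sketch, assuming you have a CrUX API key (`API_KEY` and the origin are placeholders; verify field names against the API docs):

```javascript
// Build a request body for the CrUX API's records:queryRecord endpoint.
function cruxQuery(origin, formFactor) {
  return {
    origin,      // or { url: "..." } for per-page data
    formFactor,  // "PHONE", "DESKTOP", or "TABLET"
    metrics: [
      "largest_contentful_paint",
      "interaction_to_next_paint",
      "cumulative_layout_shift",
    ],
  };
}

// Usage sketch (not run here; requires a CrUX API key):
// const res = await fetch(
//   "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=" + API_KEY,
//   { method: "POST", body: JSON.stringify(cruxQuery("https://example.com", "PHONE")) }
// );
// const record = (await res.json()).record;
// const lcpP75 = record.metrics.largest_contentful_paint.percentiles.p75;
```

Querying `formFactor: "PHONE"` and `"DESKTOP"` separately is how you confirm the mobile/desktop gap before deciding where to spend optimization effort.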
The Practical Workflow
Here is how we use both data sources on client projects:
1. Establish baselines with field data
Check CrUX (via PageSpeed Insights or BigQuery) for all three Core Web Vitals at both origin and URL level. Identify which metrics are failing and on which pages.
2. Diagnose with lab data
For each failing page, run Lighthouse and a Chrome DevTools performance trace. Identify the specific resources, scripts, or layout patterns causing the problem.
3. Fix and validate in the lab
Implement the fix, run Lighthouse again, confirm the lab metric improved. This is your fast feedback loop.
4. Validate in the field
Deploy the fix and wait for field data to update. CrUX uses a 28-day rolling window, so meaningful changes take 2 to 4 weeks to appear. Monitor the trend.
5. Set up ongoing monitoring
Use RUM (the web-vitals library or a commercial RUM service) for continuous field monitoring. Set alerts for metric regressions. Run Lighthouse CI on every deployment for lab-side regression detection.
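A minimal RUM sketch for step 5: the threshold numbers below are the published Core Web Vitals boundaries, while the `/vitals` endpoint and the beacon payload format are placeholders for your own collection backend:

```javascript
// Classify a Core Web Vitals sample against the published thresholds
// (LCP and INP in milliseconds, CLS unitless). These are the same
// "good / needs improvement / poor" buckets CrUX reports.
const THRESHOLDS = {
  LCP: [2500, 4000],
  INP: [200, 500],
  CLS: [0.1, 0.25],
};

function rate(name, value) {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}

console.log(rate("LCP", 3200)); // "needs-improvement"

// Browser hookup with the web-vitals library (sketch; "/vitals" is a
// placeholder for your own endpoint):
// import { onLCP, onINP, onCLS } from "web-vitals";
// const report = ({ name, value }) =>
//   navigator.sendBeacon("/vitals", JSON.stringify({ name, value, rating: rate(name, value) }));
// onLCP(report); onINP(report); onCLS(report);
```

Alerting on the share of "poor" samples per page gives you the regression signal weeks before the CrUX window catches up.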
Common Mistakes
Optimizing for Lighthouse score instead of field metrics. A 100 Lighthouse score is satisfying but meaningless if your field CrUX data still shows failures. The score is a heuristic. The field data is reality.
Ignoring field data because "our visitors are on fast connections." You do not know this unless you have measured it. Even if your primary audience is design professionals on desktop, they visit your site from coffee shops, airports, hotel wifi, and mobile devices. The long tail of slow visits is what CrUX captures.
Changing multiple things at once and attributing field improvement to one change. Field data is influenced by everything: your changes, seasonal traffic patterns, browser updates, visitor demographics. Isolate variables in lab testing, validate direction in field data, but do not over-attribute.
Running Lighthouse once and treating the result as fact. Lighthouse results vary between runs due to network conditions, background processes, and simulation variance. Run it 3 to 5 times and take the median.
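The median-of-runs advice is simple arithmetic; a sketch, with hypothetical readings:

```javascript
// Take the median of repeated Lighthouse runs so a single noisy
// run cannot masquerade as a regression (or an improvement).
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Five LCP readings (ms) from repeated runs of the same page:
const runs = [1380, 1420, 1450, 1610, 2900]; // one outlier run
console.log(median(runs)); // 1450 — the outlier does not dominate
```

Averaging the same five readings would report roughly 1750 ms, dragged up by the single bad run; the median is the more honest summary.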
As we covered in the Core Web Vitals playbook, the goal is not to chase a number but to ensure real visitors experience your site the way you designed it. Both data sources contribute to that goal in different ways.
Tools Summary
Lab data tools:
- Lighthouse (built into Chrome DevTools)
- WebPageTest for detailed waterfall analysis
- Chrome DevTools Performance panel for interaction profiling
- Lighthouse CI for automated regression testing
Field data tools:
- CrUX via PageSpeed Insights (easiest access)
- CrUX via BigQuery (most flexible)
- web-vitals JavaScript library (real-time RUM)
- Commercial RUM providers (Datadog, SpeedCurve, etc.)
FAQ
How long does it take for CrUX data to reflect a change?
CrUX uses a 28-day rolling window. A significant improvement typically starts to show within 2 to 4 weeks of deployment, depending on traffic volume, and is only fully reflected once the entire window postdates the change.
Can I get field data for low-traffic pages?
CrUX requires a minimum sample size to report data. Low-traffic pages may not have enough data for per-URL reporting. In that case, use the origin-level data as a proxy, or deploy the web-vitals library for your own RUM collection.
Is Lighthouse v12 more accurate than earlier versions?
Each Lighthouse version improves scoring calibration and throttling accuracy. But lab data will always differ from field data because it simulates conditions rather than measuring them.
The Point
Performance measurement is a conversation between two perspectives. Lab data gives you control and specificity. Field data gives you truth and breadth. Use them together. Trust neither in isolation. The studios that get performance right are the ones that have learned to read both fluently.