Largest Contentful Paint Internals: Why Optimizations Fail Over Time

Published on March 30, 2026

Largest Contentful Paint element selection process diagram

You spent hours optimizing your hero image, preloading it, and compressing it to perfection. Still, your Largest Contentful Paint (LCP) score refuses to improve. In some cases, it even gets worse.


This frustration usually comes from a simple misunderstanding: the LCP element is not necessarily the element you believe matters most visually.


LCP is a dynamic metric calculated by the browser: it reports the render time of the single largest image or block of text visible in the viewport.


The internals here matter immensely. As the page loads, the browser continuously monitors candidate elements. Each time a larger candidate is painted, it replaces the previous one as the LCP element, and reporting stops once the user first interacts with the page.
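The candidate contest can be sketched in a few lines. This is a deliberate simplification, not the actual Chromium algorithm, and the element names and sizes are invented for illustration:

```javascript
// Sketch of the browser's candidate contest (simplified): candidates
// arrive in paint order, and a new candidate only replaces the current
// one if it is strictly larger.
function finalLcpCandidate(paintedCandidates) {
  let current = null;
  for (const candidate of paintedCandidates) {
    if (current === null || candidate.size > current.size) {
      current = candidate; // a larger element paints: LCP moves to it
    }
  }
  return current;
}

// The hero paints early, but a late, font-dependent paragraph is bigger,
// so it takes over as the final LCP element.
const winner = finalLcpCandidate([
  { id: 'hero-image', size: 120000, renderTime: 800 },
  { id: 'font-paragraph', size: 150000, renderTime: 2400 },
]);
console.log(winner.id, winner.renderTime); // font-paragraph 2400
```

Note that the hero's fast 800 ms paint is irrelevant here: the reported LCP time is the render time of whichever candidate ends up largest.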


Because of this, a seemingly small detail can take over. A large paragraph that depends on a slow-loading font, or a background image applied to a big container, can become your LCP even if your main hero image appears earlier. The browser does not care about design importance. It cares about timing and size at the moment of rendering.

Most developers optimize the element they believe is the most important, instead of the element that is actually slowing things down. This is why many LCP optimizations fail.


How the Browser Really Chooses the Largest Element

You might spend hours perfecting your hero banner, only to see no improvement in your LCP score. The reason is simple. Your browser may not be treating that banner as the main element at all.


Instead of focusing on one obvious candidate, the browser constantly evaluates multiple elements as the page loads. It is essentially running a real-time contest: the largest element painted within the visible area wins, and the moment it finishes rendering becomes your LCP time.


This situation is sometimes called an LCP hijack. Your carefully optimized hero element can be replaced by something unexpected, such as:

  • A large block of text that loads quickly
  • A dynamically inserted component
  • A background image that renders before other elements

The solution is not to optimize what you assume is the LCP element. The solution is to identify the element the browser is actually measuring.
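You can ask the browser directly which element it is measuring, using the standard `largest-contentful-paint` entry type. The guard keeps this sketch from throwing in environments that do not support the entry type:

```javascript
// The most recent entry in the list is the current LCP candidate.
function lastLcpEntry(entries) {
  return entries.length ? entries[entries.length - 1] : null;
}

if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('largest-contentful-paint')) {
  new PerformanceObserver((list) => {
    const entry = lastLcpEntry(list.getEntries());
    if (entry) {
      // entry.element is the DOM node the browser actually picked.
      console.log('Current LCP candidate:', entry.element, entry.startTime);
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

Run this in the browser console (or a small injected script) and watch the logged element change as the page loads; the last one logged before you interact is the element you should be optimizing.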


Why Lighthouse and Real User Data Tell Different Stories

You run Lighthouse and see a beautiful green score. Everything looks fast. Then you check real user data, and the numbers tell a completely different story. This mismatch is not an error; it is a predictable consequence of how Lighthouse works.

Lighthouse lab data vs real user performance comparison


Lighthouse is a diagnostic tool.
It tests your page in a controlled environment using a fixed device profile, stable network conditions, and a clean browser state. This setup makes results consistent and repeatable.


However, that same consistency also limits realism.


Real users do not live in perfect conditions. Their experience includes:

  • Unstable networks with changing speeds and latency
  • Older devices with slower processors and limited memory
  • Third-party scripts that behave differently across regions and sessions
  • Browsers filled with extensions and background tabs

Lighthouse shows what your site can achieve under ideal circumstances. Real users show what your site actually delivers. Your goal is to understand why real-world performance behaves differently and fix the causes that matter to users.
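Real-user numbers for a public origin are available from the Chrome UX Report (CrUX) API. The sketch below only builds the request body; the endpoint, `formFactor` values, and metric name reflect the public CrUX API as I understand it, so verify them against the current documentation before relying on this:

```javascript
// Field data for real users comes from the Chrome UX Report (CrUX).
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

function buildCruxQuery(origin) {
  return {
    origin,                              // e.g. 'https://example.com'
    formFactor: 'PHONE',                 // compare against mobile users
    metrics: ['largest_contentful_paint'],
  };
}

// Usage (requires an API key; not executed here):
// fetch(`${CRUX_ENDPOINT}?key=${API_KEY}`, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildCruxQuery('https://example.com')),
// });
console.log(JSON.stringify(buildCruxQuery('https://example.com')));
```

Comparing the 75th-percentile LCP in the response against your Lighthouse number usually makes the lab-versus-field gap concrete.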


The Hidden Delays That Slow Everything Down


You optimized the main image. You compressed files. You inlined critical CSS. You followed every checklist item. Still, the score does not move.


At this point, the problem is often not the main element itself. The delay usually comes from many small resources that block rendering.


These files may look harmless on their own, but together they stretch the critical rendering path and delay the moment when the main element can appear on screen.


This issue is rarely one single large file. It is usually a chain of small dependencies that quietly slows everything down.
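The Resource Timing API can surface that chain. The `renderBlockingStatus` field is a Chromium-only addition to `PerformanceResourceTiming` at the time of writing, so treat this as a diagnostic sketch rather than a cross-browser tool:

```javascript
// List resources the browser treated as render-blocking, with how long
// each one took. Together these stretch the critical rendering path.
function renderBlockers(resourceEntries) {
  return resourceEntries
    .filter((e) => e.renderBlockingStatus === 'blocking')
    .map((e) => ({ url: e.name, duration: Math.round(e.duration) }));
}

if (typeof performance !== 'undefined' && performance.getEntriesByType) {
  console.table(renderBlockers(performance.getEntriesByType('resource')));
}
```

Even when each entry in the table looks small, summing the durations often explains why the main element paints late.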


When Lazy Loading Delays the First Impression

Lazy loading is a useful technique. It delays loading images or videos that are not immediately visible, saving time and bandwidth. For content below the initial viewport, this approach works well.

lazy loading


Problems start when the element that appears first on screen is accidentally set to lazy load.


Consider a few scenarios:

  • On smaller screens, a different image may appear at the top
  • A promotion banner might shift content positions
  • A layout change might move a new element into the viewport

If that new element is configured to load lazily, the browser will wait before displaying it. The page may look partially loaded, creating a slow first impression.


To catch this issue, observe how your page loads on different devices. Watch closely for large elements that appear with a delay. Those delayed elements are often the true source of the problem.


The fix is not to load everything immediately. The fix is to ensure the very first visible element loads as quickly as possible on every screen size.
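A quick audit for this mistake can be scripted. The DOM query only runs in a browser; `isAboveFold` is plain geometry and is my own illustrative helper, not a standard API:

```javascript
// A rect overlaps the first viewport if any part of it is between the
// top of the screen and the fold.
function isAboveFold(rect, viewportHeight) {
  return rect.top < viewportHeight && rect.bottom > 0;
}

if (typeof document !== 'undefined') {
  for (const img of document.querySelectorAll('img[loading="lazy"]')) {
    if (isAboveFold(img.getBoundingClientRect(), window.innerHeight)) {
      // Candidates for loading="eager" (and possibly fetchpriority="high").
      console.warn('Lazy image in first viewport:', img.currentSrc || img.src);
    }
  }
}
```

Run it at several viewport sizes, since the element that lands above the fold often changes between mobile and desktop layouts.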


Stop Guessing and Start Analyzing Your Initial Calls

You optimized your hero image carefully, yet your LCP time still went up.


This situation is not random. It means the bottleneck moved somewhere else.


Think of it like a relay race. You improved one runner, but another runner became the slowest link in the chain.


LCP is not unpredictable. It is dynamic. When you fix one delay, another element can take its place.


Real progress comes from measuring the actual rendering delays instead of relying on assumptions.
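One common way to measure those delays is to split an image LCP into four phases: time to first byte, resource load delay, resource load time, and element render delay. The function below works on plain millisecond values pulled from navigation, resource, and LCP timing entries; the sample numbers are invented:

```javascript
// Break an LCP measurement into phases so the real bottleneck is visible.
function lcpPhases({ ttfb, resourceStart, resourceEnd, lcpTime }) {
  return {
    ttfb,
    loadDelay: resourceStart - ttfb,    // waiting before the image request
    loadTime: resourceEnd - resourceStart,
    renderDelay: lcpTime - resourceEnd, // downloaded but not yet painted
  };
}

const phases = lcpPhases({
  ttfb: 200, resourceStart: 900, resourceEnd: 1400, lcpTime: 2100,
});
console.log(phases); // 700 ms load delay: the image was discovered late
```

In this example, compressing the image further would shave the 500 ms load time, but the larger wins are the two 700 ms phases on either side of it.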


When LCP Alone Is Not Enough

A fast LCP score often feels like success. The page loads quickly, so everything must be fine.


In many modern applications, especially single-page applications, this assumption can be misleading.


The page may look ready, but the application might still be processing scripts, loading data, or attaching event listeners. During this time, the interface appears interactive but does not respond.


This creates a frustrating experience. Users see the page and try to interact with it, but nothing happens.

In these cases, the problem is not visual loading speed. The problem is interaction readiness. For interactive applications, performance measurement should include metrics beyond LCP, such as responsiveness and stability.
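Interaction readiness can be observed with Event Timing (`event` entries). Tracking the slowest interaction is roughly what the INP metric tries to capture; this is a sketch of the idea, not a compliant INP implementation:

```javascript
// Keep the slowest real interaction seen so far. Entries without an
// interactionId (e.g. plain hovers) are ignored.
function worstInteraction(eventEntries) {
  let worst = null;
  for (const e of eventEntries) {
    if (e.interactionId && (!worst || e.duration > worst.duration)) {
      worst = e;
    }
  }
  return worst;
}

if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('event')) {
  new PerformanceObserver((list) => {
    const worst = worstInteraction(list.getEntries());
    if (worst) console.log('Slowest interaction:', worst.name, worst.duration);
  }).observe({ type: 'event', buffered: true, durationThreshold: 40 });
}
```

If this logs long durations on a page whose LCP looks fine, you are seeing exactly the gap between "looks ready" and "responds to input".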


Wrapping Up

Optimizing LCP is not just about speed. It is about loading the right resource at the right time.


Aggressive optimizations can help performance, but they can also delay the element that matters most.


Real success comes from identifying what is truly critical when the page first becomes visible.


FAQ

How does the browser specifically determine which element is the Largest Contentful Paint (LCP) element, especially with dynamically loaded content?
The browser determines the LCP element by assessing various candidates during page load, often surprising developers by picking dynamically loaded content or unexpected elements rather than the intended hero image. This internal selection process means the "hero" isn't always the LCP element.
What are the practical implications of the differences between LCP measurements in lab tools (like Lighthouse) versus real-world (CrUX) data?
The practical implication is that a "pristine" LCP score in lab tools like Lighthouse, which optimize for ideal conditions, might not reflect real user experiences. This means optimizing solely for lab data can create a "performance mirage" for actual users.
Why might my LCP score suddenly worsen after implementing what I thought was an LCP optimization, such as image lazy loading?
LCP scores can worsen after optimization because techniques like lazy loading, if applied incorrectly, can inadvertently delay the loading of the actual LCP element. This common pitfall causes the browser to wait longer for the most prominent content.
How can excessive render-blocking JavaScript or CSS assets significantly delay LCP, even if the LCP element itself is small and optimized?
Excessive render-blocking JavaScript or CSS assets can significantly delay LCP by forcing the browser to pause rendering until these files are processed. These "critical path killers" prevent the LCP element from being painted even if it's already optimized.
In what specific scenarios might optimizing solely for LCP lead to a suboptimal user experience, requiring focus on other Core Web Vitals like CLS or INP?
A good LCP score does not guarantee a good experience. In single-page applications especially, the page can look ready while scripts are still executing and event listeners are still attaching, so users see content they cannot interact with. Responsiveness (INP) and layout stability (CLS) capture these problems; LCP alone does not.

linkedin | github | twitter