Now that we have a few months of data for secondary pages, let's analyze it to see if we should make any changes to the selection algorithm. We can pick a few popular home pages and see if the secondary pages are what we'd expect.
The custom metric that generates the candidate list of secondary pages is crawl_links.js.
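For context, here's a minimal sketch of how a custom metric like this might build its candidate list. To be clear, this is not the actual crawl_links.js logic; the same-origin filtering and the area-based ranking (favoring large, LCP-style links) are assumptions for illustration:

```js
// Hypothetical sketch of a secondary-page candidate collector.
// The real crawl_links.js scoring may differ; the area heuristic
// below is an assumption, not the actual implementation.
function getCandidateSecondaryPages(maxCandidates = 10) {
  const seen = new Set();
  const candidates = [];

  for (const a of document.querySelectorAll('a[href]')) {
    let url;
    try {
      url = new URL(a.getAttribute('href'), location.href);
    } catch {
      continue; // skip malformed hrefs
    }
    // Only same-origin, non-self links can become secondary pages.
    if (url.origin !== location.origin) continue;
    url.hash = '';
    if (url.href === location.href || seen.has(url.href)) continue;
    seen.add(url.href);

    // Assumed heuristic: bigger visible links (e.g. the hero/LCP link)
    // are more likely to point at the page's primary content.
    const rect = a.getBoundingClientRect();
    candidates.push({ url: url.href, area: rect.width * rect.height });
  }

  candidates.sort((a, b) => b.area - a.area);
  return candidates.slice(0, maxCandidates).map(c => c.url);
}
```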
Other questions to look into:

- How often does a site not have any secondary pages? What's causing that? (One way to measure this is sketched after this list.)
- How much does the inclusion of secondary pages change basic web health stats?
  - technology adoption
  - page weight trends
  - CWV performance/availability
  - Lighthouse scores
- Is the algorithm suitable for both desktop and mobile? Do we need a mobile-specific one?
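For that first question, a rough sketch of one way to count sites with empty candidate lists, assuming the per-page crawl_links output has been exported as JSON lines (the file name and `crawl_links` field here are assumptions, not the real schema):

```js
// Count pages whose crawl_links custom metric produced no candidates.
// Assumes one JSON object per line with a `crawl_links` array field.
const fs = require('fs');

const rows = fs.readFileSync('crawl_links_export.jsonl', 'utf8')
  .split('\n')
  .filter(Boolean)
  .map(line => JSON.parse(line));

const empty = rows.filter(row => !(row.crawl_links || []).length).length;

console.log(`${empty} of ${rows.length} pages ` +
  `(${(100 * empty / rows.length).toFixed(1)}%) had no candidates`);
```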
The article is the largest content, and it's correctly loaded as the secondary page, but it's paywalled. Checking the waterfall, though, it looks like the full article loaded, so hopefully it's still useful.
The news article is the LCP element. It was correctly loaded as the secondary page, though again it's hidden behind a signup banner in the preview.
So from these spot checks, it seems to be working as intended.
However, the variability could make things interesting. We guessed Amazon would load a product page, but because it's a jack of all trades, it sometimes loads an Amazon Prime video, sometimes a category page, and sometimes whatever else they're advertising in the banner. It's also interesting that Amazon, unlike the others, has a different LCP link on mobile and desktop, though that could just be down to crawl time differences (even though the crawls were only 4 hours apart, on the same day this month).
Similarly, the prevalence of paywalls and content blockers could be a lot higher on secondary pages, as shown here. While all the resources seem to load in the background, this could impact LCP elements, CLS, and the like.
Still, from these brief checks it seems to be doing what we wanted it to do. So the next step would be to look at the whole dataset for interesting insights into how secondary pages differ from home pages!