Google’s John Mueller Explains How Pages Blocked by Robots.txt Are Ranked via @MattGSouthern

Google’s John Mueller recently explained how query relevancy is determined for pages blocked by robots.txt.

Google has confirmed that pages blocked by robots.txt can still be indexed, since robots.txt controls crawling rather than indexing. But how does Google know what types of queries to rank these pages for?
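
For readers unfamiliar with the mechanism, a page is “blocked by robots.txt” when a rule in the site’s robots.txt file disallows crawling of its URL. A minimal sketch (the /private-page/ path is a hypothetical example):

User-agent: *
Disallow: /private-page/

A rule like this prevents Googlebot from fetching the page’s content, but it does not prevent Google from indexing the URL itself, for instance when other sites link to it. That gap is what prompts the question.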

That’s the question that came up in yesterday’s Google Webmaster Central hangout:

“Nowadays everyone talks about user intent. If a page is blocked by robots.txt, and is ranking, how does Google determine the query relevancy with page content as it’s blocked?”

In response, Mueller says Google obviously cannot look at the content if it’s blocked.

Instead, Google finds other ways to compare the URL with other URLs, which Mueller admits is much harder when the page is blocked by robots.txt.

In most cases, Google will prioritize indexing other pages on the site that are accessible and not blocked from crawling.

Sometimes pages blocked by robots.txt will rank in search results if Google considers them worthwhile. That’s determined by the links pointing to the page.

So how does Google figure out how to rank blocked pages? The answer comes down to links.

Ultimately, it wouldn’t be wise to block content with robots.txt and hope Google knows what to do with it.

But if you happen to have content that is blocked by robots.txt, Google will do its best to figure out how to rank it.
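
As an aside, robots.txt is a crawling control, not an indexing control. If the goal is to keep a page out of search results entirely, Google’s documented alternative is to leave the page crawlable and mark it with a noindex robots meta tag:

<meta name="robots" content="noindex">

Unlike a robots.txt block, this directive only takes effect if Google is allowed to crawl the page and read the tag.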

Mueller’s full answer, starting at the 21:49 mark of the hangout, is transcribed below:

“If it’s blocked by robots.txt, then obviously we can’t look at the content. So we do have to kind of improvise and find ways to compare that URL with other URLs that are kind of trying to rank for these queries, and that is a lot harder.

Because it’s a lot harder it’s also something where, if you have really good content that is available for crawling and indexing, then usually that’s something we would try to kind of use instead of a random robotted page.

So, from that point of view, it’s not that trivial. We do sometimes show robotted pages in the search results just because we’ve seen that they work really well. When people link to them, for example, we can estimate that this is probably something worthwhile, all of these things.

So it’s something where, as a site owner, I wouldn’t recommend using robots.txt to block your content and hope that it works out well. But if your content does happen to be blocked by robots.txt we will still try to show it somehow in the search results.”
