Pages excluded from Google's index due to "different canonicalization than user"
-
Hi MOZ community,
A few weeks ago we noticed a complete collapse in traffic on some of our pages (7 out of around 150 blog posts are affected). We were able to confirm that those pages disappeared from Google's index for good at the end of January '18, while they were still findable via all other major search engines.
Using Google's Search Console (previously Webmaster Tools) we found the unindexed URLs in the list of pages excluded because "Google chose different canonical than user". Content-wise, the page that Google incorrectly selects as canonical has little to no similarity to the pages it thereby excludes from the index.
About our setup:
We are an SPA, delivering our pages pre-rendered, each with an (empty) rel=canonical tag in the HTML head that is then dynamically filled with a self-referential link to the page's own URL via JavaScript. This seemed, and still seems, to work fine for 99% of our pages, but it happens to fail for one of our top-performing ones (hence the hassle).
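For illustration, here is a minimal sketch of that kind of client-side injection (simplified, not our exact code; it assumes the pre-rendered HTML already ships an empty <link rel="canonical"> in the head):

```typescript
// Sketch: fill an empty rel=canonical tag with a self-referential URL at runtime.
// Assumes the pre-rendered HTML already contains <link rel="canonical" href=""> in <head>.
function setSelfReferentialCanonical(): void {
  const canonical = document.querySelector<HTMLLinkElement>('link[rel="canonical"]');
  if (!canonical) {
    return; // nothing to fill; the tag is expected to be in the pre-rendered markup
  }
  // Strip query string and fragment so every route canonicalises to its clean URL.
  canonical.href = `${window.location.origin}${window.location.pathname}`;
}

// Called once the router has settled on the final route for the current view.
setSelfReferentialCanonical();
```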
What we tried so far:
- going through every step of this handy guide: https://moz.com/blog/panic-stations-how-to-handle-an-important-page-disappearing-from-google-case-study --> inconclusive (healthy pages, no penalties, etc.)
- manually requesting re-indexing via Search Console --> this immediately brought back some pages; others briefly reappeared in the index and then got kicked out again for the aforementioned reason
- checking other search engines --> the pages are only gone from Google and can still be found via Bing, DuckDuckGo and other search engines
Questions to you:
- How does Googlebot handle JavaScript, and does anybody know whether its setup changed in that respect around the end of January?
- Can you think of any other reason that could cause the behavior described above?
Eternally thankful for any help!
-
Hi SvenRi, that's an interesting one! The message you're getting from Google suggests that, rather than not finding the canonical tag, the system has reason to believe that the canonical is not representative of the best content.
One thing I'd bear in mind is that Google doesn't take canonical tags as gospel but rather as guidance, so it can ignore them without there necessarily being a problem in how you've implemented the tag. Another is that while Google says its crawlers can parse JavaScript, there's evidence that they don't parse page content perfectly.
What happens when you fetch and render the pages in question using Search Console (both the page you want to rank and the page Google is selecting)? Can you see all of the content? Google uses the same JavaScript rendering as Chrome 41 (see here); have you tried accessing the pages with that? You could also try a tool like Screaming Frog with JavaScript rendering switched on to see what kind of page content comes back. It's worth making sure the canonical is generated properly, but I'd also check that the page content is rendering correctly, so you can be sure Google sees the pages as being as different as you describe. I'd also check that there isn't a second, conflicting canonical tag on the page. I know some SPA frameworks can have issues with double-opening HTML tags when one page is accessed after another; that could confuse a crawler, so it's worth double-checking too.
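As a quick check for a duplicate or unexpected canonical, something along these lines, run in the browser console (or through a headless browser) against the rendered page, will show what actually ends up in the DOM (a rough sketch, nothing more):

```typescript
// Rough check: list every canonical tag present in the rendered DOM.
// More than one entry, or an href that differs from the current clean URL,
// is worth investigating.
const canonicals = Array.from(
  document.querySelectorAll<HTMLLinkElement>('link[rel="canonical"]')
);

console.log(`Canonical tags found: ${canonicals.length}`);
canonicals.forEach((tag, i) => {
  console.log(`#${i + 1}: ${tag.href || '(empty href)'}`);
});

if (canonicals.length !== 1) {
  console.warn('Expected exactly one canonical tag in the rendered page.');
}
```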
As ever, there are rumours that Google will start giving much more weight to mobile when it comes to indexing. Given your question about things changing recently: does your site have desktop and mobile parity?
If it looks as though everything is kosher, is it possible that the page Google is suggesting is much more heavily linked to, internally or externally? On the internal side, you could review your internal linking (Will wrote a post about ways to think about internal linking here); a rough way to get an overview is sketched below. Externally, you could use a tool like Majestic to look at who is linking to these pages, and it may be worth double-checking that all of those links are genuine.
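Here's a minimal sketch of that tally, assuming a CSV export of source/destination link pairs from whatever crawler you use (the file name and column order are assumptions, so adjust them to your export):

```typescript
// Tally internal links per destination URL from a simple "source,destination,..." CSV export.
// Naive comma split: fine for a rough look, not for URLs containing commas.
import { readFileSync } from 'fs';

const rows = readFileSync('inlinks.csv', 'utf8').trim().split('\n').slice(1); // skip header row

const counts = new Map<string, number>();
for (const row of rows) {
  const [, destination] = row.split(','); // assumes the second column is the link target
  if (!destination) continue;
  counts.set(destination, (counts.get(destination) ?? 0) + 1);
}

// Print destinations sorted by how many internal links point at them.
[...counts.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([url, count]) => console.log(`${count}\t${url}`));
```

If the page Google prefers turns out to collect far more internal links than the page you want indexed, that alone can be enough to tip its canonical choice.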
TL;DR: I would start with the whole page content, not just the search directives, to make sure it's always being understood properly, and then I would look into linking. These are mainly areas of investigation and next debugging steps; hopefully they'll help narrow down the search for you!
Related Questions
-
Google suddenly indexing 1,000 fewer pages. Why?
We have a site, blog.example.org, and another site, www.example.org. The most visited pages on www.example.org were redesigned; the redesign landed May 8. I would expect this change to have some effect on organic rank and conversions. But what I see is surprising; I can't believe it's related, but I mention this just in case. Between April 30 and May 7, Google stopped indexing roughly 1,000 pages on www.example.org, and roughly 3,000 pages on blog.example.org. In both cases the number of pages that fell out of the index represents appx. 15% of the overall number of pages. What would cause Google to suddenly stop indexing thousands of pages on two different subdomains? I'm just looking for ideas to dig into; no suggestion would be too basic. FWIW, the site is localized into dozens of languages.
Intermediate & Advanced SEO | hoosteeno
-
Indexed Pages Different when I perform a "site:Google.com" site search - why?
My client has an ecommerce website with approx. 300,000 URLs (a lot of these are parameters blocked from the spiders via the meta robots tag). There are 9,000 "true" URLs being submitted to Google Search Console, and Google says it is indexing 8,000 of them. Here's the weird part: when I do a "site:website" search in Google, it says it is indexing 2.2 million pages on the domain, but I am unable to view past page 14 of the SERPs. It just stops showing results, and I don't even get a "the next results are duplicate results" message. What is happening? Why does Google say it is indexing 2.2 million URLs, but then won't show me more than 140 pages it is indexing? Thank you so much for your help. I tried looking for the answer and I know this is the best place to ask!
Intermediate & Advanced SEO | accpar
-
Google Search Console "Change of Address" - Failed Redirection Test
I have a client who has a lot of domain variations, which have all been set up in Google Search Console. I requested that the client use the COA feature in GSC for the domains that are now redirecting to other domains that they own (which are also set up in GSC). The problem is that we're not redirecting the homepages to the homepages of the destination domains, so GSC is giving us this error message: "Fails redirection test: The old site redirects to www.domain.com/blog, which does not correspond to the new site you chose." Is the only way to use GSC COA for these domains to change the homepage redirect so it goes to the homepage of the destination domain? We don't really want that, since the domain we're redirecting is a "blog.domain1.com" subdomain and we want to redirect it to "domain2.com/blog". Any help appreciated! Thanks,
Dan
Intermediate & Advanced SEO | kernmedia
-
Category Page - Optimizing the product anchors
Hello, does anybody have real experience optimizing internal links on category pages? My current client's category pages use a weird way of linking to their own products. Instead of creating different links (one on the picture, one on the photo and one on the headline), they create only one huge link, using everything as the anchor (picture, text, price, etc.). URL: http://www.friasneto.com.br/imoveis/apartamentos/para-alugar/campinas/ This technique can reduce the total number of links on the page, improving the strength of the other links, but it can also create a "crazy" anchor text for the product. Could I improve my results by creating the standard category links (one on the picture, one on the photo and one on the headline)? Hope it's not too confusing.
Intermediate & Advanced SEO | Nobody1556904963398
-
Pages are being dropped from index after a few days - AngularJS site serving "_escaped_fragment_"
My URL is: https://plentific.com/
Hi guys, a bit about us: we are running an AngularJS SPA for property search. Being an SPA, and an entirely JavaScript application, has proven to be an SEO nightmare, as you can imagine. We are currently implementing the "_escaped_fragment_" approach and serving a pre-rendered version to the bots using PhantomJS.
Unfortunately, pre-rendering the pages takes some time and, even worse, on separate occasions the pre-rendering fails and the page comes back empty.
The problem: when I manually submit pages to Google using the Fetch as Google tool, they get indexed and actually rank quite well for a few days, and after that they just get dropped from the index. Not lower in the rankings, but totally dropped. Even the Google cache returns a 404.
The questions:
1.) Could this be because of serving an "_escaped_fragment_" version to the bots (bear in mind it is identical to the user-visible one)?
2.) Could it be that using an API to fetch our results leads to the pages being considered "duplicate content"? And shouldn't that just result in a lower SERP position rather than a drop?
3.) Could this be a technical problem with how we serve the content, or does Google simply not trust sites served this way?
Thank you very much!
Pavel Velinov
SEO at Plentific.com
Intermediate & Advanced SEO | emre.kazan
-
After receiving a "Googlebot can't access your site" message, would this stop your site from being crawled?
Hi everyone,
A few weeks ago I received a "Googlebot can't access your site..... connection failure rate is 7.8%" message from Webmaster Tools. I have since fixed the majority of these issues, but I've noticed that all pages except the main home page now have a PageRank of N/A, while the home page still has a PageRank of 5. Have the connectivity issues reduced the PageRanks to N/A, or is it something else I'm missing? Thanks in advance.
Intermediate & Advanced SEO | AMA-DataSet
-
Does Google index more than three levels down if the XML sitemap is submitted via Google Webmaster Tools?
We are building a very big ecommerce site. The site has 1,000 products and many categories/levels. The site is still under construction, so you cannot see it online. My objective is to get Google to rank the products (level 5). Here is an example:
Level 1 - Homepage - http://vulcano.moldear.com.ar/
Level 2 - http://vulcano.moldear.com.ar/piscinas/
Level 3 - http://vulcano.moldear.com.ar/piscinas/electrobombas-para-piscinas/
Level 4 - http://vulcano.moldear.com.ar/piscinas/electrobombas-para-piscinas/autocebantes.html/
Level 5 - Product is on this level - http://vulcano.moldear.com.ar/piscinas/electrobombas-para-piscinas/autocebantes/autocebante-recomendada-para-filtros-vc-10.html
Thanks
Intermediate & Advanced SEO | Carla_Dawson
-
Is 404'ing a page enough to remove it from Google's index?
We set some pages to 404 status about 7 months ago, but they are still showing in Google's index (as 404's). Is there anything else I need to do to remove these?
Intermediate & Advanced SEO | nicole.healthline