If Google's index contains multiple URLs for my homepage, does that mean the canonical tag is not working?
-
I have a site that uses canonical tags on all pages; however, not all duplicate versions of the homepage are 301'd, due to a limitation in the hosting platform. So some site visitors get www.example.com/default.aspx while others just get www.example.com. I can see the correct canonical tag in the source code of both versions of the homepage, but when I search Google for the specific URL "www.example.com/default.aspx" I see that they've indexed that specific URL as well as the "clean" one. Is this a concern... shouldn't Google only show me the clean URL?
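For context, this is roughly what the canonical tag looks like in the <head> of both versions of the homepage (domain anonymized as example.com; the https scheme here is just illustrative):

```html
<!-- Present on both www.example.com/ and www.example.com/default.aspx -->
<!-- Both versions declare the single "clean" homepage URL as canonical -->
<link rel="canonical" href="https://www.example.com/" />
```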
-
In most cases, Google does seem to "de-index" the non-canonical URL if they process the tag. I put "de-index" in quotes because, technically, the page is still in Google's index - but as soon as it stops showing up at all (including in a "site:" search), I essentially consider it de-indexed. If we can't see it, it might as well not be there.
If 301-ing isn't an option, I'd double-check a few things:
(1) Is the non-canonical page ranking for anything (including very long-tail terms)?
(2) Are there any internal links to the non-canonical URL? These can send a strongly mixed signal.
(3) Are there any other mixed signals that might be throwing off the canonical? Examples include canonicals on other pages that contradict this one, 301s/302s that override the canonical, etc. (see the sketch below for what these conflicts look like).
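To make points (2) and (3) concrete, here's a minimal sketch of the kind of conflict to look for - the anchor markup is purely illustrative:

```html
<!-- The homepage (both URL versions) declares the clean URL as canonical: -->
<link rel="canonical" href="https://www.example.com/" />

<!-- ...but if a template still links to the non-canonical version internally, -->
<!-- e.g. in the main navigation, Google receives a conflicting signal: -->
<a href="https://www.example.com/default.aspx">Home</a>
```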
-
As Digital-Diameter said, the best fix for this problem is a 301. A canonical tag can eventually lead to the incorrect URL being replaced by the correct one in the SERPs, but it is also important to note that rel=canonical is a suggestion, not a directive. What this means is that the search engines will take it into consideration but may choose not to follow it.
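Since the homepage is served as default.aspx, the site is presumably running on IIS/ASP.NET. If the host does expose web.config and has the URL Rewrite module available (a big "if" given the platform limitation mentioned in the question, so treat this purely as a sketch - the rule name is arbitrary), a permanent redirect from /default.aspx to the clean URL would typically look something like this:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- 301 /default.aspx (any casing) to the clean homepage URL -->
        <rule name="HomepageCanonical301" stopProcessing="true">
          <match url="^default\.aspx$" ignoreCase="true" />
          <action type="Redirect" url="/" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```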
-
Technically, rel=canonical tags can still leave a page indexed; they simply pass authority to the canonical version as far as Google is concerned. From your question I can tell you know this, but I do have to say that 301s are the best way to address it. Blocking a page with robots.txt can help as well, but that only stops Google from crawling the page - the page can still remain indexed.
If you have pages or versions of pages that you do not want indexed, you may want to use the noindex meta tag (Google has notes on this in their documentation). Be careful, though: this will stop these pages from being indexed, but they will still be crawled (though your rel=canonical solution should make this a non-issue).
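For reference, the noindex meta tag goes in the <head> of the page you want kept out of the index - a minimal example:

```html
<!-- Tells search engines not to include this page in their index -->
<!-- (the page can still be crawled unless it is also disallowed in robots.txt) -->
<meta name="robots" content="noindex" />
```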
A few other notes:
In all cases, be sure your internal links consistently point to the URL version you have chosen for your home page.
Google Webmaster Tools (now Search Console) also reports inbound links that point to missing or broken URLs. You can use this to help determine any additional 301s you need.