Non-Canonical Pages still Indexed. Is this normal?
-
I have a website that contains some products, and the old URL structure was definitely not optimal for SEO purposes. So I created new SEO-friendly URLs on my site and decided to use canonical tags to transfer all the weight of the old URLs to the new URLs and ensure that the old ones would not show up in the SERPs. The problem is that this has not quite worked. I implemented the canonical tags about a month ago, but I am still seeing the old URLs indexed in Google, and I notice that the cache date of these pages is only about a week ago.
This leads me to believe that the spiders have been to the pages and seen the new canonical tags but are not honoring them. Is this normal behavior, and if so, can somebody explain why?
I know I could have just 301 redirected these old URLs to the new ones, but the process I would need to go through to have that done is much more of a battle than just adding the canonical tags, and I felt the canonical tags would have done the job. Needless to say, the client is not too happy right now and insists that I should have just used the 301s. In this case the client appears to be correct, but I do not quite understand why my canonical tags did not work.
Examples below:
Old Pages:
www.awebsite.com/something/something/productid.3254235
New Pages:
www.awebsite.com/something/something/keyword-rich-product-name
Canonical tag on both pages:
<link rel="canonical" href="http://www.awebsite.com/something/something/keyword-rich-product-name" />
Thanks guys for the help on this.
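One thing worth double-checking before blaming Google is that the tag is actually being served in the rendered HTML exactly as intended. Below is a minimal sketch, using only the Python standard library, of how you could extract the canonical URL from a page's source; the HTML string here just reuses the placeholder URLs from this thread, and in practice you would feed it the markup fetched from your own pages.

```python
# Minimal sketch: extract rel="canonical" hrefs from a page's HTML
# using only the standard library, so you can verify what crawlers see.
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of every <link rel="canonical"> tag encountered."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        # Self-closing <link ... /> tags also route through here by default.
        if tag == "link":
            attrs = dict(attrs)
            if attrs.get("rel", "").lower() == "canonical" and attrs.get("href"):
                self.canonicals.append(attrs["href"])

# Placeholder markup mirroring the example tag from this thread.
html = """<html><head>
<link rel="canonical"
      href="http://www.awebsite.com/something/something/keyword-rich-product-name" />
</head><body>...</body></html>"""

finder = CanonicalFinder()
finder.feed(html)
print(finder.canonicals)
```

Running this against both the old and the new page should print the same single new URL for each; an empty list, or two different canonicals, would point to a templating problem rather than slow processing on Google's side.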
-
It can take a while. I disagree very slightly with Alan and EGOL on one point: while 301s are traditionally more appropriate here, I often find that canonicals are pretty strong (and more than a hint). Both suffer from the same problem, though: the signal has to be crawled and processed, and that doesn't always happen right away. I haven't seen any reports of it taking 2, 3, etc. passes to happen, but I've definitely seen a page re-cached without the indexation signals being honored.
Are these true duplicates, or did something change in the interim? If the duplicates don't seem like true duplicates, or you put thousands of them out there all at once, Google could choose to ignore the canonicals.
If these really seem stuck, though, switching to 301s is harmless, and for a permanent URL change, it is probably the better way to go. I wouldn't expect that to kick in instantly either, though.
-
Yes... I agree with Alan. Canonical is a hint.
We put rel=canonical on about 250 pages in early February. As of today, about half of those pages are still in the SERPs. The numbers are falling, but this is really, really slow to take effect.
If you have done everything correctly it will probably work but requires patience.
-
Alan, I appreciate the help. I will go with this and see what happens, and try to find those videos. Grazie.
-
Matt Cutts has said it a few times in videos; I could not tell you which ones without doing a fair bit of searching.
-
Yes, they should, but 301s and canonicals leak a little link juice, so you want your links to go directly to the correct page where you can.
See halfway down this page; you will see just how easy it is to do all this with a few clicks.
http://www.seomoz.org/ugc/microsoft-technologies-and-seo-web-development
For you it may not be quite as easy, since you are converting from an ID to a product name, but if you look into the URL Rewrite module a bit further you will see it is possible to do this once for all pages.
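To make the "once for all pages" idea concrete, here is a rough sketch of what a bulk 301 setup could look like in an IIS7 web.config using the URL Rewrite module's rewrite maps. The rule and map names, and the paths, are hypothetical stand-ins based on the placeholder URLs in this thread; in practice the map entries would be generated from your product catalog rather than typed by hand.

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rewriteMaps>
        <!-- One <add> entry per product: old id-based path -> new keyword path -->
        <rewriteMap name="ProductRedirects">
          <add key="/something/something/productid.3254235"
               value="/something/something/keyword-rich-product-name" />
        </rewriteMap>
      </rewriteMaps>
      <rules>
        <rule name="Product 301s" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <!-- Fires only when the requested path appears in the map -->
            <add input="{ProductRedirects:{REQUEST_URI}}" pattern="(.+)" />
          </conditions>
          <!-- redirectType="Permanent" issues the 301 -->
          <action type="Redirect" url="{C:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

With this in place, one rule handles every product, and adding a new redirect is just another `<add>` line in the map.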
-
Also, do you know of any documentation that states that it takes a few passes for a canonical tag to be honored, and for 301s as well? That would really help me explain my initial thinking on using the canonical tag.
-
I get the part about the 301s, and I believe we have IIS7, but between departments it's just not as simple a change, especially given the number of products I have to do this for: 800+.
Regarding the links to the old URLs, it was my belief that with the canonical tag that weight should transfer over to the new URL as well, or was I mistaken on that?
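With 800+ products, the redirect mapping itself doesn't have to be written by hand either. Here is a hedged sketch of generating the old-to-new URL map programmatically; the catalog dict and path patterns are hypothetical stand-ins modeled on the example URLs in this thread, and in practice the id-to-slug data would come from your product database.

```python
# Hypothetical product catalog: product id -> keyword-rich slug.
catalog = {
    3254235: "keyword-rich-product-name",
    3254236: "another-product-slug",
}

def build_redirect_map(catalog, base="/something/something"):
    """Map each old id-based path to its new keyword-rich path."""
    return {
        f"{base}/productid.{pid}": f"{base}/{slug}"
        for pid, slug in catalog.items()
    }

redirects = build_redirect_map(catalog)
for old, new in sorted(redirects.items()):
    print(f"301: {old} -> {new}")
```

The resulting pairs could then be loaded into whatever mechanism your server uses for bulk 301s (a rewrite map, a lookup table, etc.), so the per-product work stays a one-time export rather than 800 manual edits.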
-
You seem to have done everything OK, but from my understanding Google does not honor 301s or canonicals on the first crawl; they wait a few passes to make sure it's not a mistake.
What sort of server are you using? If you are using Windows with IIS7, it is very easy to implement the URL rewrites and corresponding 301s.
I would 301: a canonical is a hint, a 301 is a directive. Also, if people still go to your old pages, they may make a link to the old page rather than the new URL.