Is a rel="canonical" page bad for a Google XML sitemap?
-
Back in March 2011 this conversation happened.
Rand: You don't want rel=canonicals.
Duane: Only end state URL. That's the only thing I want in a sitemap.xml. We have a very tight threshold on how clean your sitemap needs to be. When people are learning about how to build sitemaps, it's really critical that they understand that this isn't something that you do once and forget about. This is an ongoing maintenance item, and it has a big impact on how Bing views your website. What we want is end state URLs and we want hyper-clean. We want only a couple of percentage points of error.
Is this the same with Google?
-
LOL thanks!
-
You're very welcome.
And just try to think about it this way... every best practice you employ for your site is another best practice your competitors have to employ to keep up with you.
-
Yes, I understand that. It is just a lot more work for us to do with our sitemap! Thanks for your advice.
-
To clarify, when I say rel="canonical" pages, I mean pages that are using that link tag to point to another page (i.e., the pages that are NOT the canonical page). These are also the pages that Duane and Rand were talking about.
I am not saying you shouldn't include pages that are included in the actual link tag.
Let's assume you have 3 pages: A, B, and C.
Pages B and C have a rel="canonical" link that points to A.
In this scenario, you would include A in your XML Sitemap (assuming A is a high-quality page that is important to your site), and you would NOT include B and C.
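The A/B/C rule above can be sketched in a few lines. This is a hypothetical example, assuming a crawl has already recorded each page's canonical target; the URLs and the canonical map are made up for illustration:

```python
# Keep only "end state" URLs in the sitemap: pages whose rel="canonical"
# points to themselves. Pages B and C canonicalize to A, so only A survives.

def sitemap_urls(canonical_map):
    """Return only URLs that are their own canonical target."""
    return [url for url, canonical in canonical_map.items() if url == canonical]

pages = {
    "https://example.com/a": "https://example.com/a",  # canonical page: include
    "https://example.com/b": "https://example.com/a",  # points to A: exclude
    "https://example.com/c": "https://example.com/a",  # points to A: exclude
}

print(sitemap_urls(pages))  # only page A remains
```

A real implementation would read the canonical targets from each page's `<link rel="canonical">` tag rather than a hand-built dictionary.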
-
I see, but the rel="canonical" pages are good pages. I get the broken links and all that part, but I guess I do not agree on the rel="canonical" point as much, though I can see their standpoint. Do you do a lot with your sitemap and assign different values to different pages?
-
Yes, it is safe to assume that all search engines want your XML Sitemaps to be as clean and accurate as possible.
XML Sitemaps give you an opportunity to tell search engines about your most important pages, and you want to take advantage of this opportunity.
Think about it another way. Let's pretend your site and Google are both real people. In that hypothetical world, Google's first impression of your site is established through your site's XML Sitemaps. If those Sitemaps are full of broken links, redirecting URLs, and rel="canonical" pages, your site has already made a bad first impression ("If this site can't maintain an up-to-date Sitemap, I'm terrified of what I'll find once I get to the actual pages").
On the other hand, if your XML Sitemaps are full of live links that point to your site's most important pages, Google will have a positive first impression and continue on with the relationship.
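The hygiene check described above can be sketched as a simple classification step. This is a hedged sketch with hard-coded example statuses; a real audit would actually fetch each sitemap URL and record the HTTP status a crawler sees:

```python
# Sitemap hygiene sketch: only live (200) URLs belong in the sitemap.
# Redirects should be replaced with their targets; broken URLs removed.

def keep_in_sitemap(status_code):
    """True only for live pages; 3xx/4xx/5xx entries need fixing or removal."""
    return status_code == 200

audit = {
    "https://example.com/products": 200,   # live -> keep
    "https://example.com/old-sale": 301,   # redirect -> replace with target
    "https://example.com/typo-url": 404,   # broken -> remove
}

clean = [url for url, status in audit.items() if keep_in_sitemap(status)]
print(clean)
```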
Related Questions
-
Can Google Crawl This Page?
I'm going to have to post the page in question, which I'd rather not do, but I have permission from the client to do so. Question: a recruitment client of mine had their website built on a proprietary platform by a so-called recruitment specialist agency. Unfortunately the site is not performing well in the organic listings. I believe the culprit is this page and others like it: http://www.prospect-health.com/Jobs/?st=0&o3=973&s=1&o4=1215&sortdir=desc&displayinstance=Advanced Search_Site1&pagesize=50000&page=1&o1=255&sortby=CreationDate&o2=260&ij=0 Basically, as soon as you deviate from the top-level pages, you land on pages that have database-query URLs like this one. My take on it is that Google cannot crawl these pages and is therefore having trouble picking up all of the job listings. I have taken some measures to combat this, and obviously we have an XML sitemap in place, but it seems the pages that Google finds via the XML feed are not performing because there is no obvious flow of 'link juice' to them. There are a number of latest jobs listed on top-level pages like this one: http://www.prospect-health.com/optometry-jobs and when they are picked up they perform OK in the SERPs, which is the biggest clue to the problem outlined above. The agency in question have an SEO department who dispute the problem, and their proposed solution is to create more content and build more links (genius!). Just looking for some clarification from you guys, if you don't mind?
Technical SEO | | shr1090 -
"non-WWW" vs "WWW" in Google SERPS and Lost Back Link Connection
A Screaming Frog report indicates that Google is indexing a client's site for both www and non-www URLs. To me this means that Google is seeing both URLs as different, even though the page content is identical. The client has not set up a preferred URL in GWMTs. Google says to do a 301 redirect from the non-preferred domain to the preferred version (https://support.google.com/webmasters/answer/44231?hl=en), but I believe there is a way to do this in .htaccess, and an easier solution than canonical. GWMTs also shows that over the past few months this client has lost more than half of their backlinks (but there are no penalties, and the client swears they haven't done anything to be blacklisted in this regard). I'm curious as to whether Google figured out that the entire site was in their index under both "www" and "non-www" and therefore discounted half of the links. Has anyone seen evidence of Google discounting links (both external and internal) due to duplicate content? Thanks for your feedback. Rosemary
Technical SEO | | RosemaryB -
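The www/non-www consolidation discussed above boils down to picking one preferred host and 301-redirecting every request on the other host to it. Here is a minimal sketch of the decision logic as a pure function; the preferred host and URLs are hypothetical, and in practice this rule would live in server config (e.g. an .htaccess rewrite) rather than application code:

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_target(url, preferred_host="www.example.com"):
    """Return the preferred-host version of the URL, or None if no redirect is needed."""
    parts = urlsplit(url)
    if parts.netloc == preferred_host:
        return None  # already on the canonical host
    # Rebuild the URL with the preferred host, keeping path/query/fragment intact.
    return urlunsplit((parts.scheme, preferred_host, parts.path, parts.query, parts.fragment))

print(redirect_target("http://example.com/page?a=1"))  # redirect to the www version
print(redirect_target("http://www.example.com/page"))  # None: already preferred
```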
Wrong page ranked in Google, specific example
Hi All, I've searched previous questions and many talk about the same problem but do not post an actual example. I am also thinking of doing a blog post and a series of experiments once there is a theory. My target keyword is "Exhibition Stand Hire" and this is the target page on our site: http://goo.gl/qt54lb The site appears on page 6 of the SERPs (google.co.uk), but instead of this page the homepage is listed. Yet if I search for the term using quotes, i.e. "Exhibition Stand Hire", the right page appears on page 4 of the SERPs. Our home page only uses the keyword in the body text, while the target page is very optimised. Could it be over-optimised? I've tried mixing up words in the title tag so as not to offer an exact match, and I've also varied the anchor text of all incoming links, but that didn't fix the problem (hence why at the moment they all use different terms to point to this page). None of this helped alter which page is chosen to appear. Is it simply a matter of the page not being strong enough compared to other, less relevant pages on the site? How come many other sites rank better with much less effort? (I'm using OSE to determine competition.) Thank you.
Technical SEO | | georgexx -
Is there a Google 6th-page penalty?
My site has a keyword domain, but my page doesn't move up or down from the 6th page of the search results, and neither my main page nor my alt pages show beyond the 6th page. So what can I do about this penalty? Thanks for your help
Technical SEO | | iddaasonuclari -
Removing Redirected URLs from XML Sitemap
If I'm updating a URL and 301 redirecting the old URL to the new URL, Google recommends I remove the old URL from our XML sitemap and add the new URL. That makes sense. However, can anyone speak to how Google transfers the ranking value (link value) from the old URL to the new URL? My suspicion is this happens outside the sitemap. If Google already has the old URL indexed, the next time it crawls that URL, Googlebot discovers the 301 redirect and that starts the process of URL value transfer. I guess my question revolves around whether removing the old URL (or the timing of the removal) from the sitemap can impact Googlebot's transfer of the old URL value to the new URL.
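The sitemap maintenance step described in the question can be sketched as a simple substitution pass. This is a hypothetical example with made-up URLs; note that the ranking-value transfer itself happens when Googlebot crawls the 301, independently of the sitemap, which only signals which URLs you consider current:

```python
# Swap any 301-redirected URL in the sitemap for its destination,
# de-duplicating in case the destination is already listed.

def apply_redirects(sitemap, redirects):
    """Replace redirected URLs with their targets, preserving order."""
    updated = []
    for url in sitemap:
        target = redirects.get(url, url)  # fall back to the URL itself
        if target not in updated:
            updated.append(target)
    return updated

sitemap = ["https://example.com/old-page", "https://example.com/about"]
redirects = {"https://example.com/old-page": "https://example.com/new-page"}
print(apply_redirects(sitemap, redirects))
```

A fuller version would also follow redirect chains (old → interim → final) so only true end-state URLs remain.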
Technical SEO | | RyanOD -
Will I still get Duplicate Meta Data Errors with the correct use of the rel="next" and rel="prev" tags?
Hi Guys, One of our sites has an extensive number of category page listings, so we implemented the rel="next" and rel="prev" tags for these pages (as suggested by Google below). However, we still see duplicate meta data errors in SEOMoz crawl reports and also in Google Webmaster Tools. Does the SEOMoz crawl tool test for the correct use of rel="next" and rel="prev" tags and not list meta data errors if the tags are correctly implemented? Or is it still necessary to use unique meta titles and meta descriptions on every page, even though we are using the rel="next" and rel="prev" tags, as recommended by Google? Thanks, George
Implementing rel="next" and rel="prev": If you prefer option 3 (above) for your site, let's get started! Let's say you have content paginated into the URLs:
http://www.example.com/article?story=abc&page=1
http://www.example.com/article?story=abc&page=2
http://www.example.com/article?story=abc&page=3
http://www.example.com/article?story=abc&page=4
On the first page, http://www.example.com/article?story=abc&page=1, you'd include the markup in the <head> section, and likewise on the second, third, and last pages. A few points to mention: The first page only contains rel="next" and no rel="prev" markup. Pages two to the second-to-last page should be doubly-linked with both rel="next" and rel="prev" markup. The last page only contains markup for rel="prev", not rel="next". rel="next" and rel="prev" values can be either relative or absolute URLs (as allowed by the <link> tag), and if you include a <base> link in your document, relative paths will resolve according to the base URL. rel="next" and rel="prev" only need to be declared within the <head> section, not within the document <body>. We allow rel="previous" as a syntactic variant of rel="prev" links. rel="next" and rel="previous" on the one hand and rel="canonical" on the other constitute independent concepts, and both declarations can be included in the same page; for example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain both. rel="prev" and rel="next" act as hints to Google, not absolute directives. When implemented incorrectly, such as omitting an expected rel="prev" or rel="next" designation in the series, we'll continue to index the page(s) and rely on our own heuristics to understand your content.
Technical SEO | | gkgrant -
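The pagination pattern in the quoted guidance (first page gets only rel="next", middle pages get both, last page gets only rel="prev") can be sketched as a small generator. The URL pattern reuses Google's example.com URLs from the post above:

```python
# Generate the <link> tags for one page in a paginated series, following
# the first/middle/last rules from the quoted Google guidance.

def pagination_links(page, last_page, base="http://www.example.com/article?story=abc&page="):
    """Return the rel="prev"/rel="next" link tags for the given page number."""
    tags = []
    if page > 1:  # every page except the first links back
        tags.append('<link rel="prev" href="%s%d">' % (base, page - 1))
    if page < last_page:  # every page except the last links forward
        tags.append('<link rel="next" href="%s%d">' % (base, page + 1))
    return tags

for p in range(1, 5):
    print(p, pagination_links(p, 4))
```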
Google +1 not recognizing rel-canonical
So I have a few pages with the same content, just with different URLs: http://nadelectronics.com/products/made-for-ipod/VISO-1-iPod-Music-System http://nadelectronics.com/products/speakers/VISO-1-iPod-Music-System http://nadelectronics.com/products/digital-music/VISO-1-iPod-Music-System All pages rel-canonical to http://nadelectronics.com/products/made-for-ipod/VISO-1-iPod-Music-System My question is: why can't Google +1 (or Facebook and Twitter, for that matter) consolidate the +1s across all these pages? If the first two each had 5 +1s and the rel-canonical page had 5 +1s, it would be nice for all pages to display 15 +1s, not 5 each. It's my understanding that Google +1 gives the juice to the correct page, so why not display all the +1s at the same time? Hope that makes sense.
Technical SEO | | kevin4803 -
Ror.xml vs sitemap.xml
Hey Mozzers, So I've been reading some things lately, and some are saying that the top search engines do not use the ror.xml sitemap but focus just on sitemap.xml. Is that true? Do you use ROR? If so, for what purpose: products, "special articles", other uses? Can a sitemap be sufficient for all of those? Thank you, Vadim
Technical SEO | | vijayvasu