Is the full URL necessary for successful Canonical Links?
-
Hi, my first question and hopefully an easy enough one to answer.
Currently in the head element of our pages we have canonical references such as:
(Yes, untidy URL...we are working on it!)
I am just trying to find out whether this snippet of the full URL is adequate for canonicalization or if the full domain is needed as well.
My reason for asking is that the SEOmoz On-Page Optimization grading tool is 'failing' all our pages on the "Appropriate Use of Rel Canonical" value.
I have been unable to find a definitive answer on this, although admittedly most examples do use the full URL. (I am not the site developer so cannot simply change this myself, but rather have to advise him in a weekly meeting).
So in short, presumably using the full URL is best practice, but is it essential to its effectiveness when being read by the search engines? Or could there be another reason why the "Appropriate Use of Rel Canonical" value is not being green-ticked?
Thank you very much, I appreciate any advice you can give.
-
Thanks, I will get the full URLs implemented to avoid any future confusion.
I can't give an exact size of the site, but I know it is much larger than it should be. It seems as though our CMS has been unnecessarily producing new URLs for the same pages over and over, which we are aiming to fix very soon.
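For what it's worth, the kind of cleanup involved can be sketched in a few lines of Python. This is only an illustration with made-up parameter names (your CMS's actual parameters will differ):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical tracking/duplicating parameters; substitute whatever your CMS appends.
TRACKING_PARAMS = {"ref", "sessionid", "sort"}

def canonical_url(url):
    """Strip known tracking parameters so duplicate URLs collapse to one canonical form."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    # Rebuild the URL without the stripped parameters (and without any fragment).
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

print(canonical_url("http://www.example.com/page.html?ref=menu&id=7"))
# http://www.example.com/page.html?id=7
```

The same normalization is what a rel=canonical tag communicates to search engines when the CMS itself can't stop minting URL variants.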
-
Thank you for your fast responses!
Sorry Damien, I am at a loss as to how I missed this bit of information!
In light of this, do you have any clues as to why the SEOmoz on-page diagnostics tool does not like our canonical references?
-
Interesting that Google mentions absolute and relative URLs, but they don't specifically address root-relative URLs (which is what this is, since it begins with a "/") or show one in their examples.
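As an aside, the way a crawler resolves each of the three forms (root-relative, document-relative, absolute) can be reproduced with Python's standard `urljoin`; the page URL below is hypothetical:

```python
from urllib.parse import urljoin

page = "http://www.example.com/products/widgets.html"  # hypothetical page URL

# Root-relative href (leading "/") resolves against the domain root:
print(urljoin(page, "/widgets.html"))
# http://www.example.com/widgets.html

# Document-relative href resolves against the page's own directory:
print(urljoin(page, "widgets.html"))
# http://www.example.com/products/widgets.html

# An absolute href is used as-is, no resolution needed:
print(urljoin(page, "http://www.example.com/widgets.html"))
# http://www.example.com/widgets.html
```

So a root-relative canonical is unambiguous about the path, but it still leaves the scheme and host to be inferred from wherever the page was fetched.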
-
http://www.google.com/support/webmasters/bin/answer.py?answer=139394
The relevant part:
Can the link be relative or absolute?
The rel="canonical" attribute can be used with relative or absolute links, but we recommend using absolute links to minimize potential confusion or difficulties. If your document specifies a base link, any relative links will be relative to that base link.
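One way to see the "potential confusion" Google mentions: if a page is reachable under several host or protocol variants, the same root-relative canonical resolves to a different absolute URL on each variant, so it can't consolidate them. A quick sketch with hypothetical hosts:

```python
from urllib.parse import urljoin

relative_canonical = "/widgets.html"  # root-relative, as in the original question

# The same relative canonical resolves differently depending on which
# host/protocol variant the crawler happened to fetch:
for page in ("http://example.com/widgets.html",
             "http://www.example.com/widgets.html",
             "https://www.example.com/widgets.html"):
    print(urljoin(page, relative_canonical))
```

An absolute canonical pins all three variants to one target, which is presumably why Google recommends it even though relative forms are accepted.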
-
If you check halfway down the page, it answers exactly what you're after.
http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
In short, it's fine.
DD
...if you can't be bothered finding it:
"Can I use a relative path to specify the canonical, such as ?
Yes, relative paths are recognized as expected with the <link> tag. Also, if you include a <base> link in your document, relative paths will resolve according to the base URL."
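The `<base>` behaviour Google describes can likewise be sketched with `urljoin`, treating the base href rather than the page URL as the resolution root (all URLs below are hypothetical):

```python
from urllib.parse import urljoin

page_url = "http://www.example.com/deep/nested/page.html"   # where the crawler fetched the page
base_href = "http://www.example.com/shop/"                  # hypothetical <base href="..."> value

# Without a <base> element, a relative canonical resolves against the page URL:
print(urljoin(page_url, "item.html"))
# http://www.example.com/deep/nested/item.html

# With a <base> element, it resolves against the base href instead:
print(urljoin(base_href, "item.html"))
# http://www.example.com/shop/item.html
```

Which is another argument for absolute canonicals: a stray or forgotten `<base>` tag silently changes what a relative canonical points at.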