Canonical Tag on Blog - Roger says it's incorrect?
-
Hi
I have just released a post on my blog and wanted to check my primary keyword for the post to make sure the page scores well. However, when I ran the page report it showed that the rel canonical tag was incorrect.
An example of the link:
The blog is http://www.example.com/Blog/post-comment/
The canonical tag is below.
What am I doing wrong, as it looks correct to me?
-
Thanks Dr Peter, this is all making good sense to me.
-
In some cases, we return a warning if the canonical doesn't match the display URL. I realize this can be confusing, because often canonicals don't match the page, by necessity. It's essentially just a heads up, in that case, to make sure no one does anything dangerous. There are two canonical messages, though - one is an error or warning, and one is just a notice. I'm not sure which one you're seeing.
As Sean said, though, I'm not seeing any obvious issues with the canonical tag on your blog. This may just be a hyperactive warning on our part.
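For anyone curious what a "canonical doesn't match the display URL" check boils down to, here is a minimal sketch in Python (stdlib only). It is not Roger's actual logic - the function names and the trailing-slash normalisation are my assumptions:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of any <link rel="canonical"> tag on the page."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            if a.get("rel") == "canonical":
                self.canonical = a.get("href")

def canonical_matches(page_html, display_url):
    """Return (canonical_href, True/False) for whether the canonical
    points back at the URL the page was fetched from."""
    parser = CanonicalFinder()
    parser.feed(page_html)
    href = parser.canonical
    # A missing canonical is treated as a non-match.
    return href, (href is not None and
                  href.rstrip("/") == display_url.rstrip("/"))
```

A self-referencing canonical passes; a canonical pointing elsewhere (which is often intentional) would trip the warning, which is exactly the "heads up" case described above.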
-
That was the correct one, thanks for looking over it...
-
Hi
I have checked the canonical link on your blog, on the duplicate content post (I assume this is the one).
It looks like
This looks good to me.
Is it possible the error report was looking at one of the examples in your text? The 5th and 8th uses of the word canonical in the article could have confused the checker.
Let me know if I am checking the wrong information, or if you would like me to look at anything else.
Sean
-
Sorry I was rushing... it looks like the below.
<link href="http://www.example.co.uk/Blog/duplicate-content-seo-basics/" rel="canonical">
-
Hi
Below is an example of a canonical tag on the seomoz blog; the differences I can see from yours are:
<link rel="canonical" href="http://www.seomoz.org/blog/my-favorite-way-to-get-links-and-social-shares-whiteboard-friday" />
The href= sits between rel="canonical" and the URL, rather than at the start of the tag
The tag is closed with " />" - a trailing slash, with a space after the final quote
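One hedge worth adding: attribute order inside a tag should not matter to an HTML parser, so the rel=/href= ordering and the XML-style " />" closing are cosmetic differences. A quick stdlib sketch (the snippet URLs are just the examples from this thread) showing both variants parse to the same attributes:

```python
from html.parser import HTMLParser

class LinkAttrs(HTMLParser):
    """Records the attributes of the first <link> tag encountered."""
    def __init__(self):
        super().__init__()
        self.attrs = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.attrs is None:
            self.attrs = dict(attrs)

def parse_link(snippet):
    """Parse a single <link> snippet and return its attributes as a dict."""
    p = LinkAttrs()
    p.feed(snippet)
    return p.attrs

# Attribute order and the " />" closing make no difference to the parser:
assert (parse_link('<link href="http://www.example.co.uk/Blog/duplicate-content-seo-basics/" rel="canonical">')
        == parse_link('<link rel="canonical" href="http://www.example.co.uk/Blog/duplicate-content-seo-basics/" />'))
```

So if a checker flags the tag, the cause is more likely the URL value itself (e.g. .co.uk vs .com, or trailing-slash differences) than the attribute layout.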
I hope this helps
Sean
Related Questions
-
It's Hurt My Rank? HELP!!!
Hi guys, John here. I just began using the Moz service several days ago. Recently I noticed that one of my keywords was on the first Google search result page, but after I built some external links, the rank dropped from 1 to 8. I think the low-quality external links may have caused the drop. So my question: should I delete the bad-quality links, or build more higher-quality links? Which is better for me? It's easy to delete the bad links and hard to build high-quality links. So what's your opinion, guys? Thanks, John
Technical SEO | smokstore
-
Why are these URLs suddenly appearing in WMT?
One of our clients has experienced a sudden overnight increase in crawl errors for smartphones, for pages which no longer exist, and there are no links to these pages according to Google. There is no evidence as to why Google would suddenly start to crawl these pages, as they have not existed for over 5 years, but it does come after a new site design was put live. The pages do not appear to be in the index when a site search is used. There was a similar increase in crawl errors on desktop initially after the new site went live, but these quickly returned to normal. Mobile crawl errors only became apparent after this. There are some URLs showing which have no linking page detected, so we don't know where these URLs are being found. WMT states "Googlebot couldn't crawl this URL because it points to a non-existent page". Those that do have a linking page are showing an internal page which also doesn't exist, so it can't possibly link to any page. Any insight is appreciated. Andy and Mark at Click Consult.
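As a first triage step, grouping the WMT crawl-error export by response code can make patterns jump out (e.g. one template generating all the 404s). A throwaway sketch - the "URL" and "Response Code" column headers are assumptions about the export format, not a guaranteed layout:

```python
import csv
import io

def triage_crawl_errors(csv_text):
    """Group a crawl-error export by response code so the biggest
    buckets are easy to spot. Column names 'URL' and 'Response Code'
    are assumed; adjust to match the actual export headers."""
    buckets = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        buckets.setdefault(row["Response Code"], []).append(row["URL"])
    return buckets
```

Sorting each bucket by URL prefix then often reveals which section of the old site Google is re-crawling.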
Technical SEO | ClickConsult
-
Best way to implement noindex tags on archived blogs
Hi, I have approximately 100 old blog posts that are still of interest to readers, which I'd like to keep on our website but potentially noindex because they may be viewed poorly by Google. A lot of the content in the blogs is similar to one another (as we blog about the same topics quite often), which is why I believe it may be in our interest to noindex older blogs that we have newer content for on more recent blogs. Firstly, does that sound like a good idea? Secondly, can I use Google Tag Manager to implement noindex tags on specific blog pages? It's a hassle to get the webmaster to add in the code, and I've found no mention on the usual SEO blogs of whether you can implement such tags via Tag Manager. Or is there a better way to implement noindex tags en masse? Thanks!
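On the "which posts to noindex" part, one rough way to shortlist candidates is to flag every post that is not the newest on its topic. A sketch, assuming you can export a (slug, date, topic) triple per post - the topic grouping is entirely an assumption about how you'd categorise them:

```python
def noindex_candidates(posts):
    """posts: list of (slug, published, topic) tuples, with published
    as an ISO date string (so string comparison orders correctly).
    Returns slugs that are NOT the newest post on their topic --
    candidates for a noindex meta tag."""
    newest = {}
    for slug, published, topic in posts:
        if topic not in newest or published > newest[topic][0]:
            newest[topic] = (published, slug)
    return [slug for slug, published, topic in posts
            if newest[topic][1] != slug]
```

As for Tag Manager: a meta robots tag injected via GTM only takes effect if Googlebot executes the container's JavaScript, so a server-side meta tag (or an X-Robots-Tag response header) is the safer route.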
Technical SEO | TheCarnage
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html ..could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter and the canonical meta tag used to indicate our preference. As expected we encountered no duplicate content issues and everything was good. This is the chain of events: Site migrated to new platform following best practice, as far as I can attest to. Only known issue was that the verification for both google analytics (meta tag) and GWMT (HTML file) didn't transfer as expected so between relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified. URL structure and URIs were maintained 100% (which may be a problem, now) Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page, in the index, the only variation being the ?ref= URI) Checked BING and it has indexed each root URL once, as it should. Situation now: Site no longer uses ?ref= parameter, although of course there still exists some external backlinks that use it. This was intentional and happened when we migrated. I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today is at over 1,000 (another wtf moment) I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and HTML site-map page. 
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist, and won't drop them from the index but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows. Options that occurred to me (other than maybe making our canonical tags bold or locating a Google bug submission form 😄 ) include: A) robots.txt-ing *?ref=*, but to me this says "you can't see these pages", not "these pages don't exist", so isn't correct. B) Hand-removing the URLs from the index through a page removal request per indexed URL. C) Applying a 301 to each indexed URL (hello BING dirty sitemap penalty). D) Posting on SEOMoz because I genuinely can't understand this. Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting - I have no idea why and can't think of the best way to correct the situation. Do you? 🙂 Edited To Add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There's no message explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
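For option C, the 301 target for each indexed URL is just the URL with the tracking parameter stripped, which is easy to compute in bulk. A stdlib sketch - "ref" as the only tracking parameter is taken from the question; anything else in the query string is preserved:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

TRACKING_PARAMS = {"ref"}  # the heat-map tracking parameter from the old site

def clean_target(url):
    """Strip known tracking parameters, returning the URL a 301
    (or a canonical tag) should point at."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query)
             if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), ""))
```

Run over the list of indexed ?ref= URLs, this yields a one-to-one redirect map, which also removes the "pages still work" problem that keeps them in the index.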
Technical SEO | Tinhat
-
Do canonical tags pass all of the link juice onto the URL they point to?
I have an ecommerce website where the category pages have various sorting and paging options which add a suffix to the URLs. My site is set up so the root category URL, domain.com/category-name, has a canonical tag pointing to domain.com/category-name/page1/price; however, all links, both internal & external, point to the former (i.e. domain.com/category-name). I would like to know whether all of the link juice is being passed onto the canonical tag URL? Otherwise, should I change the canonical tag to point the other way? Thanks!
Technical SEO | tjhossy
-
Canonical tags pointing at old URLs that have been 301'd
I have a site which has various white label sites with the same content on each. I have canonical tags on the white label sites pointing to the main site. I have changed some URLs on the main site and 301'd the previous URLs to the new ones. Is it OK to have the canonicals pointing to the old URLs that now have a 301 redirect on them?
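One way to avoid canonical-to-301 chains is to resolve each old URL through the redirect map first and point the canonical at the final destination. A minimal sketch, assuming the redirects are available as a simple {old: new} mapping (your actual redirects may live in server config instead):

```python
def final_destination(url, redirects, max_hops=10):
    """Follow a {old_url: new_url} 301 map to the end of the chain,
    so canonical tags on the white label sites can point straight at
    the live URL instead of at a redirect. max_hops guards against
    accidental redirect loops."""
    hops = 0
    while url in redirects and hops < max_hops:
        url = redirects[url]
        hops += 1
    return url
```

Canonicals through a 301 generally still consolidate, but updating them to the final URL removes one hop and one thing that can break later.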
Technical SEO | BeattieGroup
-
Does a CMS inhibit a site's crawlability?
I smell baloney but I could use a little backup from the community! My client was recently told by an SEO that search engines have a hard time getting to their site because using a CMS (like WordPress) doesn't allow "direct access to the html". Here is what they emailed my client: "Word Press (like your site is built with) and other similar “do it yourself” web builder programs and websites are not good for search engine optimization since they do not allow direct access to the HTML. Direct HTML access is needed to input important items to enhance your websites search engine visibility, performance and creditability in order to gain higher search engine rankings." Bots are blind to CMSs and html is html, correct? What do you think about the information given by the other SEO?
Technical SEO | Adpearance
-
I have a WordPress site with 30+ categories and about 2k tags. I'd like to bring that number down for each taxonomy. What is the proper practice to do that?
I want to bring my categories down to about 8 or so, and the tags... they're just a mess and I'd really like to bring that figure down significantly and set up a standard for usage. My thought was to remove the unneeded tags and categories and set up 301 redirects for the ones that I'm removing. Is that even necessary? Are there tools that can assist with this? What are the "gotchas" I should be aware of? Thanks!
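If you do 301 the removed tag archives, the redirect rules can be generated from a merge map rather than written by hand. A sketch emitting Apache-style rules - the /tag/ prefix assumes WordPress's default tag base, so adjust it to your permalink settings:

```python
def tag_redirect_rules(merges):
    """Build Apache-style 301 rules from a {removed_slug: kept_slug}
    merge map for WordPress tag archives. The /tag/ path prefix is
    an assumption based on the default permalink structure."""
    return ["Redirect 301 /tag/%s/ /tag/%s/" % (old, new)
            for old, new in sorted(merges.items())]
```

The same pattern works for category archives; only the path prefix changes.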
Technical SEO | digisavvy