What is the difference between rel=canonical and 301s?
-
Hi Guys
I have been told a few times to add the rel=canonical tag to my category pages. However, every category page is actually different from the others, apart from the listings I have for my staff on each page. Some of my staff specialise in areas that cross over into other categories. But pointing, for example, Psychic Readings over to Love and Relationships just because 5 of my staff members are listed under both categories doesn't seem justified to me: the actual content delivered, and the depth at which each category's skills are covered, are quite different.
Tell me have I got this right or completely wrong?
Here is an example: the Psychic Readings category - https://www.zenory.com/psychic-readings
And the Love and Relationships category - https://www.zenory.com/love-relationships
Hope this makes sense - I really look forward to your feedback, guys!
Cheers
-
I understand what you mean - to be very honest, I don't think this content snippet is generating duplicate content.
However, I don't really understand the mechanism:
At https://www.zenory.com/horoscopes/taurus/day I would expect to find the daily horoscope for Taurus. When I click on Capricorn I would expect to go to https://www.zenory.com/horoscopes/capricorn/day - however, I remain on the same page and the horoscope is shown in a lightbox. I would rather put it on a separate page (if the horoscopes of all signs are present in the HTML of one sign's page, these pages become quite similar when you look at the source code).
Sounds a bit confusing, but I hope you get what I mean.
Rgds,
Dirk
-
Hi Dirk
I wanted to ask you another question with regard to this.
I have horoscope pages that have just been published today.
We offer a daily horoscope for each of the 12 star signs. These are unique and different each day for each sign; however, there is a weekend love section at the bottom of each sign's page that stays the same for the whole week.
https://www.zenory.com/horoscopes/taurus/day
https://www.zenory.com/horoscopes/aries/day
The links above will show you a couple of the daily horoscopes. You can see the weekend love sections are different between signs; however, each one will be the same for the same star sign tomorrow. You can't see this yet, as we only published and released these today, but you will be able to tell the difference once tomorrow's horoscope is published. Hopefully I have explained myself well here.
So my question is: half of the content on a single page will be duplicate content - everything besides the new daily horoscope entry. I'm wondering if I need to add canonical tags, or if I should create a separate page for the weekend love horoscope of each star sign.
I hope this makes sense!
Thanks again Dirk!
-
That answers my question Dirk, thank you again!!!
-
For the examples you gave I would certainly not use a 301 or a canonical tag. The content is unique, and only a relatively small part is common (the staff list).
To explain the difference:
A canonical tag is used when you have pages that are identical (or almost identical) and which are accessible under different URLs. A good example is an e-commerce site with a list of articles, like mysite.com/umbrellas. If sorting the products changes the URL to something like mysite.com/umbrellas?sort=high, it's best to put a canonical tag on the second URL, pointing to the first, so that Google will not index all the variations. A visitor can still access both pages. Googlebot normally respects the canonical, but is not obliged to do so.
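As a minimal sketch (with hypothetical URLs based on the umbrella example), the canonical tag goes in the head of the sorted variation and points at the clean URL:

```html
<!-- In the <head> of mysite.com/umbrellas?sort=high (the sorted variation) -->
<link rel="canonical" href="https://mysite.com/umbrellas" />
```

Remember this is a hint rather than a directive, so search engines may still index a variation in some cases.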
A 301 is different - in effect you give this message to the browser: this page is no longer available at this location but has moved to a new one. It's no longer possible to visit the original page (not for humans and not for bots). Googlebot has to respect this directive.
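On an Apache server, for example, a 301 can be set up with a single line in the .htaccess file - a sketch with hypothetical paths:

```apache
# Hypothetical example: the page has moved permanently.
# Browsers and bots are both sent to the new URL; the old one is no longer reachable.
Redirect 301 /old-category.html https://www.example.com/new-category.html
```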
A last option you can use is noindex/follow. You normally use this for pages that have very little value for search engines, but where you would still like the bots to follow the links and index the pages which are listed. You can use this for pages of the type blog.com/tag/subject, which generate lists of all the articles tagged with that subject. In general, pages like this are good for cross-linking but have low value for search engines, so it's better not to have them indexed.
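The noindex/follow option is a robots meta tag in the page's head - a sketch for a hypothetical tag page:

```html
<!-- In the <head> of a low-value listing page such as blog.com/tag/subject:
     keep this page out of the index, but still follow the links it contains -->
<meta name="robots" content="noindex, follow" />
```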
Hope this clarifies,
Dirk