Should we use the rel-canonical tag?
-
We have a secure version of our site, as we often gather sensitive business information from our clients.
Our https pages have been indexed as well as our http version.
-
Could it still be a problem to have an http and an https version of our site indexed by Google? Is this seen as being a duplicate site?
-
If so, can this be resolved with a rel=canonical tag pointing to the http version?
Thanks
-
-
Agreed - this is generally an issue with relative paths, and job one is to fix it. In most cases, you really don't want these crawled at all. I do think rel=canonical is a good bet here - 301 redirects can get really tricky with http/https, and you can end up creating loops. It can be done right, but it's also easy to screw up, in my experience.
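To illustrate the loop risk mentioned above, here is a minimal .htaccess sketch of an https-to-http 301 (assuming Apache with mod_rewrite; example.com is a placeholder). The RewriteCond is what keeps the rule from firing on plain-http requests and looping:

```apache
# Hypothetical sketch: send https requests to the http version with one 301.
# Without the RewriteCond, the rule would also match http requests,
# redirecting the page to itself in a loop.
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

Obviously any page that actually collects sensitive client data needs to stay on https and be excluded from a rule like this.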
-
-
Yes, having 2 versions of the same content can be seen as duplicate content and could cause issues.
-
Yes, include a canonical tag in the header (assuming both http & https pages are close to identical). This will help Google's crawler figure out which version of the page to show in the search results.
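For anyone unfamiliar, the tag is a single line in the head of the https page, along these lines (the URL is a placeholder):

```html
<!-- In the <head> of https://www.example.com/page.html -->
<link rel="canonical" href="http://www.example.com/page.html" />
```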
-
-
Yes, I would suggest the canonical tag as the easiest resolution.
And Irving is right, PDFs are most definitely indexed. I am not sure how they are interpreted or whether they would specifically count as duplicate content, but the PDF idea is not something I would EVER suggest, as it seems to have lots of negative repercussions.
I would most definitely agree that relative links are probably your issue. If you add the canonical tag, remove inline relative links, and make them absolute http URLs, this should resolve itself in a month or so.
-
I disagree:
a) PDFs are both indexed AND read by crawlers.
b) Even if you don't have navigation to the file, Google can sometimes find it if it's in a folder you are not blocking in robots.txt.
c) If someone links to it once on the web, it's getting crawled and indexed.
If you have an https section, that content should be behind a login and not accessible to the engines. Your problem sounds like your https pages have relative links on them: Google crawls an https page, follows the relative links, and stays on https. Fix that, and you will stop the https versions of your pages getting indexed as duplicates of the http pages.
Absolute http canonical tags will help, but they are not the solution on their own; you need to fix the https leaking on your secure pages.
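The leak described above comes down to relative vs. absolute links; a quick illustration (example.com is a placeholder):

```html
<!-- Relative: crawled from an https page, this resolves to
     https://www.example.com/about.html, so the crawler stays on https -->
<a href="/about.html">About</a>

<!-- Absolute: always resolves to the http version,
     regardless of which protocol the current page was loaded over -->
<a href="http://www.example.com/about.html">About</a>
```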
-
You can "noindex" them within the HTML. But if you really want a fun trick for when you can't get around a mass amount of duplicated content and it isn't there for the sake of rankings (for example, MLS listings):
Change the content into a PDF or another file format, so it can't be crawled.
Once again, it will NOT be crawled, so don't go doing this to an entire site.
But maybe your clients' confidential data can be submitted this way, and it will not get indexed, except for the subpage; you can then noindex that subpage.
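The "noindex within the HTML" mentioned above is just a meta robots tag in the page head, for example:

```html
<!-- On the subpage you don't want in the index -->
<meta name="robots" content="noindex, follow" />
```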
Hope this helps.
Your pal
Chenzo