The Moz Q&A Forum


Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

Could you use a robots.txt file to disallow a duplicate content page from being crawled?

Intermediate & Advanced SEO
  • gregelwell, Jun 4, 2012, 4:49 PM

    A website has duplicate content pages to make it easier for users to find the information from a couple of spots in the site navigation. The site owner would like to keep it this way without hurting SEO.

    I've thought of using the robots.txt file to disallow search engines from crawling one of the pages. Do you think this is a workable/acceptable solution?
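
    For reference, the kind of robots.txt rule being discussed here is a single Disallow directive. This is only a sketch: the path below is made up for illustration, and the file has to sit at the site root.

        # Illustrative only: tells all compliant crawlers not to crawl one of the duplicate URLs.
        # The path is hypothetical.
        User-agent: *
        Disallow: /alternate-section/duplicate-page/

    As the replies point out, this blocks crawling of the page but does not reliably keep its URL out of the index.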

    • KyleChamp @gregelwell, Jun 4, 2012, 9:15 PM

      Yeah, sorry for the confusion. I put the tag on all the pages (original and duplicate). I sent you a PM with another good article on the rel=canonical tag.

      • gregelwell @Dr-Pete, Jun 4, 2012, 7:52 PM

        Peter, Thanks for the clarification.

        • Dr-Pete (Staff) @anthonytjm, Jun 4, 2012, 6:36 PM

          Generally agree, although I'd just add that Robots.txt also isn't so great at removing content that's already been indexed (it's better at prevention). So, I find that it's not just not ideal - it sometimes doesn't even work in these cases.

          Rel-canonical is generally a good bet, and it should go on the duplicate (you can actually put it on both, although it's not necessary).

          • gregelwell @gregelwell, Jun 4, 2012, 6:23 PM

            Next time I'll read the reference links better 🙂

            Thank you!

            • anthonytjm @gregelwell, Jun 4, 2012, 6:02 PM

              Per Google Webmaster Tools:

              If Google knows that these pages have the same content, we may index only one version for our search results. Our algorithms select the page we think best answers the user's query. Now, however, users can specify a canonical page to search engines by adding a <link> element with the attribute rel="canonical" to the <head> section of the non-canonical version of the page. Adding this link and attribute lets site owners identify sets of identical content and suggest to Google: "Of all these pages with identical content, this page is the most useful. Please prioritize it in search results."
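
              To make the quoted guidance concrete, here is a minimal sketch of that element as it would appear in the <head> of the duplicate (non-canonical) page; the URL is illustrative only:

                  <!-- On the duplicate page, pointing at the version you want to rank -->
                  <link rel="canonical" href="https://www.example.com/original-page/" />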

              • gregelwell @KyleChamp, Jun 4, 2012, 5:41 PM

                Thanks Kyle. Anthony had a similar view on using the rel=canonical tag. I'm just curious: should it be added to the original page or the duplicate page? Or both?

                Thanks,

                Greg

                • gregelwell @anthonytjm, Jun 4, 2012, 5:37 PM

                  Anthony, thanks for your response. See Kyle's reply: he also felt using the rel=canonical tag was the best thing to do. However, he seemed to think you'd put it on the original page (the one you want to rank for), whereas you're suggesting putting it on the duplicate page. Should it be added to both, while specifying which page is the 'original'?

                  Thanks!

                  Greg

                  • Adam.Whittles, Jun 4, 2012, 5:33 PM

                    I'm not sure I understand why the site owner thinks the duplicate content is necessary.

                    If I were in your situation, I would try to convince the client to remove the duplicate content from their site rather than trying to find a way around it.

                    If the information is difficult to find, this may be due to a problem with the site architecture. If the site does not flow well enough for visitors to find the information they need, then perhaps a site redesign is necessary.

                    • anthonytjm, Jun 4, 2012, 5:11 PM (last edited Jun 4, 2012, 6:23 PM)

                      Well, the answer would be yes and no. A robots.txt file would stop the bots from crawling the page, but links from other pages (on your site or elsewhere) to that blocked page could still result in its URL being indexed. As posted in Google Webmaster Tools:

                      "You need a robots.txt file only if your site includes content that you don't want search engines to index. If you want search engines to index everything in your site, you don't need a robots.txt file (not even an empty one).

                      While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results."

                      I think the best way to avoid any conflict is to apply the rel="canonical" tag to each duplicate page that you don't want indexed.

                      You can find more info on rel=canonical here.

                      Hope this helps out some.

                      • KyleChamp, Jun 4, 2012, 5:06 PM

                        The best way would be to use the rel=canonical tag.

                        On the page you would like to rank for, put the rel=canonical tag in the <head>.

                        This lets Google know that this is the original page.
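
                        (The markup KyleChamp included at this point appears to have been stripped when the page was archived. A hedged reconstruction, using an illustrative URL, of the self-referencing tag he describes:

                            <!-- In the <head> of the page you want to rank; URL is illustrative -->
                            <link rel="canonical" href="https://www.example.com/original-page/" />

                        Note that Dr-Pete's reply elsewhere in this thread suggests the tag should instead go on the duplicate page, pointing at the original.)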

                        Check out this post by Rand about the rel=canonical tag: http://www.seomoz.org/blog/canonical-url-tag-the-most-important-advancement-in-seo-practices-since-sitemaps
