
The Moz Q&A Forum


Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies.

Internal search: rel=canonical vs noindex vs robots.txt

Technical SEO · 3 participants · 9 posts · 5.7k views
  • JohannCR · Apr 12, 2012, 1:51 PM

    Hi everyone,

    I have a website with a lot of internal search results pages indexed. I'm not asking whether they should be indexed or not - I know they shouldn't be, according to Google's guidelines - and they create a bunch of duplicate pages, so I want to solve this problem.

    The thing is, if I noindex them, the site is going to lose a non-negligible chunk of traffic: nearly 13% according to Google Analytics!

    I thought of blocking them in robots.txt. That wouldn't keep them out of the index, but the pages appearing in Google SERPs would then look empty (no title, no description), so their CTR would plummet and I would lose some traffic that way too...

    The last idea I had was to use a rel=canonical tag pointing to the original search page (which is empty, with no results), but that would probably have the same effect as noindexing them, wouldn't it? (I've never tried it, so I'm not sure.)

    Of course I did some research on the subject, but every source I found recommended only one of the three methods! One even recommended noindex plus a robots.txt block, which makes no sense, because a crawler blocked by robots.txt can never see the noindex tag in the first place.

    Can anybody tell me which option is best for keeping this traffic?

    Thanks a million
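
    For reference, here's roughly what the three options look like in practice (a minimal sketch - the domain is a placeholder, and the path is just the search page pattern that comes up later in the thread):

      # robots.txt - blocks crawling, but URLs that are already indexed can stay in the index
      User-agent: *
      Disallow: /searchpage.htm

      <!-- meta robots noindex on each search results page - drops the page from the index once recrawled -->
      <meta name="robots" content="noindex, follow">

      <!-- rel=canonical on each search results page, pointing to the original search page -->
      <link rel="canonical" href="https://www.example.com/searchpage.htm">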

    • Dr-Pete (Staff) @JohannCR · Apr 13, 2012, 7:13 PM (last edited 7:51 PM)

      Yeah, normally I'd say to NOINDEX those user-generated search URLs, but since they're collecting traffic, I'd have to side with Alan - a canonical may be your best bet here. Technically, they aren't "true" duplicates, but you don't want the 1K pages in the index, you don't want to lose the traffic (which NOINDEX would do), and you don't want to kill those pages for users (which a 301 would do).

      Only thing I'd add is that, if some of these pages are generating most of the traffic (e.g. 10 pages = 90% of the traffic for these internal searches), you might want to make those permanent pages, like categories in your site architecture, and then 301 the custom URLs to those permanent pages.
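
      A rough sketch of that last idea, assuming an Apache setup with mod_rewrite (the category path and the query value are invented for illustration):

        # .htaccess (hypothetical): 301 one high-traffic internal search to a permanent category page
        RewriteEngine On
        # Match searches for "blue widgets" - the query value is just an example
        RewriteCond %{QUERY_STRING} (^|&)query=blue\+widgets(&|$) [NC]
        # The trailing "?" drops the original query string from the redirect target
        RewriteRule ^searchpage\.htm$ /blue-widgets/? [R=301,L]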

      • JohannCR @Dr-Pete · Apr 13, 2012, 5:29 PM

        Huh, not sure, since I'm not a developer (and didn't work on that site's development), but I'd say all of the above. If it's useful, here is their URL structure - there are two kinds:

        • /searchpage.htm?action=search&pagenumber=xx&query=product+otherterms

        So I guess they are generated when a user makes a search. They're paginated (about 15 pages each, generally), and I can roughly tell how much they duplicate each other: some probably overlap when there are a lot of variations for the product. There are only a few complete duplicates (when the searched product is the same with different added terms - that doesn't happen much in this list).

        • /searchpage-searchterm-addedterm-number.htm

        Those I find surprising; I don't know whether they are pages generated with a fixed URL or rewritten ones (I haven't looked at the .htaccess yet, but I will - I get a headache just thinking about reading that thing, lol).

        There are about a thousand of them in total (according to Google Analytics - roughly half of each kind, and nearly all indexed by Google), on a website of about 12,000 pages.

        Maybe the traffic loss will be compensated by removing the competition between those search pages and the product pages (and rel=canonical is surely far less brutal than a noindex in that respect), but without experience in this kind of situation it's hard to make a decision...

        Really appreciate you guys taking the time to help!

        • Dr-Pete (Staff) · Apr 13, 2012, 4:14 PM

          Alan's absolutely right about how canonical works, but I just want to clarify something - what about these pages is duplicated? In other words, are these regular searches (like product searches) with duplicate URLs, are these paginated searches (with page 2, 3, etc. that appear thin), or are these user-generated searches spinning out into new search pages (not exact duplicates but overlapping)? The solutions can vary a bit with the problem, and internal search is tricky.

          • AlanMosley @JohannCR · Apr 13, 2012, 12:23 PM

            Just one more point: a canonical is just a hint to the search engines, not a directive. If they think the pages should not be merged, they will ignore it - so, in a way, they may make the decision for you.

            • JohannCR @JohannCR · Apr 13, 2012, 12:20 PM

              There aren't a lot of real duplicates - they're more just similar - and the most-visited ones are unique, so I'll keep the most important ones and just toss the few duplicates.

              Thanks a lot for your help - problem solved!

              • AlanMosley @JohannCR · Apr 13, 2012, 12:05 PM (last edited 7:14 PM)

                No, not like a noindex - more like a merge.

                Will it make you rank for many keywords? Not necessarily: a page that is all about blue widgets is going to rank higher than a page that covers many different subjects, including blue widgets.

                A canonical is really for duplicate content, or very similar content.

                So you have to decide what your pages are: are they duplicate or near-duplicate content, or are they unique?

                If the pages are unique, then do nothing and let them rank. If you think they are near-duplicates, then use a canonical. If there are only a few, then I would not worry either way.

                If you decide they are unique, then I would look at making the page titles unique as well - maybe the descriptions too.

                • JohannCR @AlanMosley · Apr 13, 2012, 11:56 AM

                  Thanks for your answer!

                  OK, so you're saying it will indeed act like a noindex over time.

                  So if one of the result pages would have ranked for a particular query, it won't rank any more - just like with a noindex - and it will lose the 13% of traffic it was generating...

                  Otherwise it would be too easy to make a page rank for the keywords used on a bunch of other pages that point to it via rel=canonical... wouldn't it?

                  I'm starting to think I can't do anything... Maybe I'll just noindex the handful that cause duplicates and leave the rest in the index.

                  • AlanMosley · Apr 13, 2012, 11:36 AM

                    Rel=canonical is the way to go; it tells the search engines that all the credit for the different URLs goes to the original search page. Eventually only the original search page will remain in the index.
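
                    As a sketch, using the URL pattern from elsewhere in the thread (the domain is a placeholder), every search-results variant would carry the same canonical pointing back to the main search page:

                      <!-- on /searchpage.htm?action=search&pagenumber=2&query=product+otherterms -->
                      <link rel="canonical" href="https://www.example.com/searchpage.htm">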

