Should I set up a disallow in the robots.txt for catalog search results?
-
When the crawl diagnostics came back for my site, it showed around 3,000 pages of duplicate content. Almost all of them are catalog search results pages. I also did a site: search on Google and they have most of the results pages in their index too. I think I should just disallow the bots from the /catalogsearch/ subfolder, but I'm not sure whether this will have any negative effect?
-
One step at a time = long-term success. I wish you the best with it, Jordan.
-
Thanks Alan, you're right, this site has quite a long way to go. The first crawl just finished and I noticed that most of the errors were due to duplicate content, so I decided to tackle that first. Thank you for all the pointers; I'll take a look at all of those as soon as I can.
-
Totally agree with Alan, it can cause circular navigation problems for crawlers too.
-
Jordan,
Others might have a different view; however, that's exactly what I recommend to clients, but only if you've got other HTML-link-based ways for bots to reach all the content directly, and a good sitemap.xml file to reinforce that.
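For illustration only, a minimal robots.txt sketch along those lines might look something like the following (the /catalogsearch/ path comes from the question above; the sitemap URL is just a placeholder):

```
# Block crawling of internal catalog search result pages
User-agent: *
Disallow: /catalogsearch/

# Point bots at the XML sitemap as reinforcement
Sitemap: http://www.example.com/sitemap.xml
```

Because Disallow works by prefix matching, everything under /catalogsearch/ is blocked, while product and category URLs outside that folder remain crawlable.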
I'm happy to see that you have a sound overall site architecture; however, I see no robots.txt file at your root, so I'm not sure what's up with that. Also, your sitemap.xml file only has 43 URLs in it. That's a problem, not because Google can't find content by other means, but because I've found Google likes that reinforcement, and Bing especially does a better job of discovering content with a proper sitemap.xml submitted through its webmaster system (it's less efficient at discovering content by other means).
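As a rough sketch of what each entry in that sitemap.xml needs to contain (using one of the product URLs discussed below as the example entry):

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per canonical page you want reinforced -->
  <url>
    <loc>http://www.durafaucet.com/kitchen-faucets/mk850.html</loc>
  </url>
</urlset>
```

Submitting a file like that through Google Webmaster Tools and Bing's webmaster system gives both engines the reinforcement described above.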
I'd also suggest you have a big push ahead of you in dealing with near-duplicate content.
For example:
http://www.durafaucet.com/mk850-orb.html
http://www.durafaucet.com/kitchen-faucets/mk850.html
Sure, these are unique products. Except there's already so little unique content on either page that the shared content, compounded by the site-wide replication of the top, sidebar, and footer content, means the total weight of uniqueness is at the very low end of the spectrum.
And then there's the issue of a complete lack of inbound link authority: OpenSiteExplorer.org might be wrong, but it currently shows almost no inbound links. Not only will you need inbound links to the home page, but also to as many inner pages as is realistic given your implementation capabilities, and this is especially true for category-level pages (including a variety of inbound link anchor text: brand, domain, keyword phrase, and generic text).
So if you don't address those types of issues, removing all the duplicates that show up in search now won't deliver as much long-term value as you'll need.
Related Questions
-
Set Up htaccess File
Looking for expert help (willing to pay) to set up a proper .htaccess file. I'm having an issue, as the site has a subdomain at secure.domain.com with .php extensions there. I tried a couple of recommended code sets, but it seems to be a mess. The site is working properly, but this may be causing ranking issues. It's coded in pure HTML and PHP, no WordPress. The delete-www rule causes the secure side to fail; the delete-html-extensions rule causes the .php extensions to fail.
Technical SEO | | execubob
-
What should I consider before setting up a subdomain?
Morning all! We've just been approached by IT. They've been asked to develop an online 'portal' where clients can upload and download materials. IT will be developing a portal that sits on the company network perimeter (hosted on our internal servers). The concept is that 3rd parties can get and update information regarding cases in progress; the first use will be for agencies who will retrieve records via the portal and then post reports after a consultation. However, I would like to have an automatic link forwarding to the portal from the web address oursite.com/dave. We will look to create a robots.txt file and anything else needed to prevent listings/indexing. Does any of the above mess with your SEO? The Directors have asked if they can have this on a subdomain of our site. Is this wise? And are there any major SEO considerations for my team to worry about? Better still, have any of you had to deal with this before? If so, what happened? All the best, John
Technical SEO | | Muhammad-Isap
-
Confirming robots.txt code for deep directories
Just want to make sure I understand exactly what I am doing. If I place this in my robots.txt:
Disallow: /root/this/that
By doing this I want to make sure that I am ONLY blocking the /that/ directory and anything below it. I want to make sure that /root/this/ still stays in the index; it's just the /that/ directory I want gone. Am I correct in understanding this?
Technical SEO | | cbielich
-
Yoast settings help
I could use some real help here with my Yoast settings. I had some great settings before, but we switched servers and it looks like we lost them all. I've taken some screenshots and I'm hoping someone can help! http://d.pr/i/chNQ http://d.pr/i/51TY http://d.pr/i/io7S http://d.pr/i/nak http://d.pr/i/acon The site is run by a couple of guys. Please help!
Technical SEO | | ttb
-
WordPress robots.txt sitemap submission?
Alright, my question comes directly from this article by SEOmoz: http://www.seomoz.org/learn-seo/r... Yes, I have submitted the sitemap to Google's and Bing's webmaster tools, and I want to add the location of our site's sitemaps. Does that mean I erase everything in the robots.txt right now and replace it with this?

```
User-agent: *
Disallow:
Sitemap: http://www.example.com/none-standard-location/sitemap.xml
```

I ask because WordPress comes with some default disallows like wp-admin, trackback, and plugins. I have also read this, but was wondering if it is the correct way to add a sitemap to a WordPress robots.txt: http://www.seomoz.org/q/removing-robots-txt-on-wordpress-site-problem I am using Multisite with the Yoast plugin, so I have more than one sitemap.xml to submit. Do I erase everything in robots.txt and replace it with what SEOmoz recommended? Hmm, that doesn't sound right. Something like this?

```
User-agent: *
Disallow:
Sitemap: http://www.example.com/sitemap_index.xml
Sitemap: http://www.example.com/sub/sitemap_index.xml
```

Technical SEO | | joony2008
-
Pagination and SEO: How do I handle paginated pages with search parameters?
Today I watched a very interesting video on YouTube about pagination and SEO. I have implemented pagination with rel="next" and rel="prev" on my paginated pages. You can get a better idea by visiting the following pages: www.vistastores.com/patio-umbrellas www.vistastores.com/patio-umbrellas?p=2 www.vistastores.com/patio-umbrellas?p=3 I have added a NOINDEX, FOLLOW attribute to page 2, page 3, and so on. There is a simple question from my side: can I remove the NOINDEX, FOLLOW attribute from the paginated pages or not? I have a lot of confusion and issues when the paginated URLs contain search parameters. You can get a better idea by visiting the following URLs: http://www.vistastores.com/patio-umbrellas?dir=asc&order=name&p=2 http://www.vistastores.com/patio-umbrellas?dir=asc&order=name&p=3 What is the best suggestion for this kind of page?
Technical SEO | | CommercePundit
-
Robots.txt versus sitemap
Hi everyone, let's say we have a robots.txt that disallows specific folders on our website, but a sitemap submitted in Google Webmaster Tools that lists content in those folders. Who wins? Will the sitemap content get indexed even if it's blocked by robots.txt? I know content that is blocked by robots.txt can still get indexed and display a URL if Google discovers it via a link, so I'm wondering if that would happen in this scenario too. Thanks!
Technical SEO | | anthematic
-
How do I use the Robots.txt "disallow" command properly for folders I don't want indexed?
Today's sitemap webinar made me think about the disallow feature. It seems the opposite of sitemaps, but it also seems both are kind of ignored in varying ways by the engines. I don't need help semantically, I got that part. I just can't seem to find a contemporary answer about what should be blocked using the robots.txt file. For example, I have folders containing site comps for clients that I really don't want showing up in the SERPs. Is it better to not have these folders on the domain at all? There are also security issues I've heard of that make sense: simply look at a site's robots.txt file to see what they are hiding, which makes it easier to hunt for files when you know the directory they are contained in. Do I concern myself with this? Another example is a folder I have for my XML sitemap generator. I imagine Google isn't going to try to index this or count it as content, so do I need to add folders like this to the disallow list?
Technical SEO | | SpringMountain