How to exclude URL filter searches in robots.txt
-
When I look through my Moz reports I can see it has included 'pages' that it shouldn't have, i.e. URLs with filtering parameters added, such as this one: http://www.mydomain.com/brands?color=364&manufacturer=505
How can I exclude all of these filters in the robots.txt? I think it'll be:
Disallow: /*?color=$
Is that the correct syntax with the $ sign in it? Thanks!
-
Unless you're specifically calling out Bing or Baidu in your robots.txt file, they should follow the same directives as Google, so testing with Google's robots.txt tester should suffice for all of them.
-
Yes, but what about Bing and the rest of the search engines?
-
Adrian,
I agree that there certainly is a right answer to the question posted, as the question asks specifically about one way to manage the issue: blocking the filters in the robots.txt file. What I was getting at is that this may or may not be the "best" way, and that I'd need to look at your site and your unique situation to figure out which solution would best fit your needs.
It is very likely that with these parameters a robots.txt block is the best approach, assuming the parameters aren't added by default into category page or category pagination navigational links, as that would affect the bot's ability to crawl the site. Also, if people are linking to those URLs (highly unlikely, though), you may consider a robots meta noindex,follow tag instead so the PageRank can flow to other pages.
And I'm not entirely sure the code you provided above will work when the blocked parameter is the first one in the query string (e.g. domain.com/category/?color=red), as there is an additional wildcard between the ? and the parameter. I would advise testing this in Google Webmaster Tools first:
- On the Webmaster Tools Home page, click the site you want.
- Under Crawl, click Blocked URLs.
- If it's not already selected, click the Test robots.txt tab.
- Copy the content of your robots.txt file, and paste it into the first box.
- In the URLs box, list the site to test against.
- In the User-agents list, select the user-agents you want (e.g. Googlebot).
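If you'd like to sanity-check the wildcard behaviour before pasting rules into the tester, here is a minimal sketch of the matching semantics Google documents for robots.txt (`*` matches any run of characters, including none; a trailing `$` anchors the end of the URL). This is only an illustration of the rules, not a substitute for Google's own tester:

```python
import re

def robots_pattern_matches(pattern: str, url_path: str) -> bool:
    """Emulate robots.txt pattern matching: '*' matches any run of
    characters (including none), a trailing '$' anchors the end of the
    URL, and everything else is treated literally."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    regex = "^" + "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored:
        regex += "$"
    return re.search(regex, url_path) is not None

# Because '*' can match zero characters, the rule also catches URLs
# where the filter is the very first query parameter:
print(robots_pattern_matches("/*?color=", "/brands?color=364&manufacturer=505"))  # True
print(robots_pattern_matches("/*?color=", "/category/?color=red"))                # True
```

So under Google's documented semantics the wildcard rule should still match when the parameter comes straight after the ?, but verifying in the tester is still the safe move.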
-
There certainly is a right answer to my question - I already posted it here earlier today:
Disallow: /*?color=
Disallow: /?*manufacturer=

Without the $ at the end, which would otherwise denote the end of the URL.
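To see concretely why the $ must go: under Google's wildcard rules, $ anchors the pattern to the end of the URL, so `Disallow: /*?color=$` would only block URLs that end immediately after `color=`, never ones carrying an actual value. A quick illustrative check (a sketch of the matching rules, not Google's actual implementation):

```python
import re

def blocked(pattern: str, url_path: str) -> bool:
    """Emulate robots.txt matching: '*' = any run of characters,
    trailing '$' = end-of-URL anchor, everything else literal."""
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "^" + "".join(".*" if c == "*" else re.escape(c) for c in body)
    if anchored:
        regex += "$"
    return re.search(regex, url_path) is not None

url = "/brands?color=364&manufacturer=505"
print(blocked("/*?color=$", url))  # False: '$' demands the URL end right after 'color='
print(blocked("/*?color=", url))   # True: without '$', any colour value is caught
```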
-
Hello Adrian,
The Moz reports are meant to help you uncover issues like this. If you're seeing non-canonical URLs in the Moz report then there is a potential issue for Google, Bing and other search engines as well.
Google does respect wildcards (*) in the robots.txt file, though it can easily be done wrong. There is no right or wrong answer to the issue of using filters or faceted navigation, as each circumstance is going to be different. However, I hope some of these articles will help you identify the best approach for your needs:
(Note: Faceted navigation is not exactly the same as category filters, but the issues and possible solutions are very similar.)
- Building Faceted Navigation That Doesn't Suck
- Faceted Navigation Whiteboard Friday
- Duplicate Content: Block, Redirect or Canonical
- Guide to eCommerce Facets, Filters and Categories
- Rel Canonical How To and Why Not
- Moz.com Guide to Duplicate Content

I don't know how your store handles these filters (e.g. does it add them automatically, or only when a user selects a filter?), so I can't give you the answer, but I promise that if you read the articles above you will have a very good understanding of all the options and can choose which is best for you. That might end up being as simple as blocking the filters in your robots.txt file, or you may opt for rel canonical, a noindex meta tag, AJAX, Google parameter handling, etc.
Good luck!
-
It's not Google's index that I'm interested in in this case; it's the Moz reports. Moz was including over 10,000 'pages' because it was indexing these URLs. Now that I know how to edit the robots.txt, Moz will be prevented from indexing them again (we only have around 2,000 real pages, not 10,000).
-
I sought out the answer from a developer and got the following reply, so posting here in case it helps someone else:
To exclude pages with color or manufacturer in them you can use:
Disallow: /*?color=
Disallow: /?*manufacturer=

The $ in your attempt should be omitted, as it denotes the end of the URL.
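Put together, the resulting robots.txt might look like the sketch below. One caveat worth testing before deploying: under Google's wildcard rules, `/?*manufacturer=` only matches URLs whose path begins with a literal `/?`, so for URLs like `/brands?color=364&manufacturer=505` the broader `/*manufacturer=` form (used here) may be what's actually needed — verify both in the robots.txt tester first:

```
User-agent: *
Disallow: /*?color=
Disallow: /*manufacturer=
```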
-
Hi
I would recommend excluding these in Google Webmaster Tools. Once logged in to your account, under the "Crawl" menu you will find "URL Parameters". Find the relevant parameter in the list on this page and you can tell Google not to index these pages.
Hope this helps.
Steve