Best use of robots.txt for "garbage" links from Joomla!
-
I recently started out on SEOmoz and am trying to do some cleanup according to the campaign report I received.
One of my biggest gripes is the "Duplicate Page Content" issue.
Right now I have over 200 pages flagged with duplicate page content.
Now.. this is triggered because SEOmoz has picked up auto-generated links from my site.
My site has a "send to friend" feature, and every time someone wants to send an article or a product to a friend via email, a pop-up appears.
It seems these pop-up pages have been picked up by the SEOmoz spider; however, they are pages I would never want indexed in Google.
So I just want to get rid of them.
Now to my question:
I guess the best solution is to make a general rule via robots.txt, so that these pages are not indexed or considered by Google at all.
But how do I do this? What should my syntax be?
A lot of the links look like this, but with different id numbers depending on the product being sent:
http://mywebshop.dk/index.php?option=com_redshop&view=send_friend&pid=39&tmpl=component&Itemid=167
I guess I need a rule that makes Google ignore any link that contains this:
view=send_friend
-
Hi Henrik,
It can take up to a week for SEOmoz crawlers to process your site, which may be an issue if you recently added the tag. Did you remember to include all user agents in your first line?
User-agent: *
Be sure to test your robots.txt file in Google Webmaster Tools to ensure everything is correct.
Couple of other things you can do:
1. Add a rel="nofollow" on your send to friend links.
2. Add a meta robots "noindex" to the head of the popup html.
3. And/or add a canonical tag to the popup. Since I don't have a working example, I don't know what to point the canonical to (whatever content it is duplicating), but this is also an option.
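For options 1 and 2 above, the markup would look something like this (the href is taken from the example URL in the question; the link text is just a placeholder):

```html
<!-- Option 1: rel="nofollow" on the send-to-friend link -->
<a href="/index.php?option=com_redshop&amp;view=send_friend&amp;pid=39&amp;tmpl=component&amp;Itemid=167"
   rel="nofollow">Send to a friend</a>

<!-- Option 2: meta robots noindex in the <head> of the pop-up template -->
<meta name="robots" content="noindex, nofollow">
```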
-
I just tried adding
Disallow: /view=send_friend
(I removed the trailing /.)
However, a crawl flagged the duplicate content problem again.
Is my syntax wrong?
-
The second one "Disallow: /*view=send_friend" will prevent googlebot from crawling any url with that string in it. So that should take care of your problem.
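As a rough illustration of why the wildcard version matches while the plain `/view=send_friend` does not, here is a small sketch of Google-style pattern matching (this is an illustrative approximation, not Google's actual matcher):

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Check whether a robots.txt Disallow pattern matches a URL path.

    Approximates Google's extensions: '*' matches any character
    sequence, and a trailing '$' anchors the pattern to the end of
    the URL. Patterns are matched from the start of the path.
    """
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then restore '*' as 'match anything'
    regex = re.escape(pattern).replace(r"\*", ".*")
    regex = "^" + regex + ("$" if anchored else "")
    return re.search(regex, path) is not None

url = "/index.php?option=com_redshop&view=send_friend&pid=39&tmpl=component&Itemid=167"
print(robots_pattern_matches("/*view=send_friend", url))  # True: wildcard spans the query string
print(robots_pattern_matches("/view=send_friend", url))   # False: URL doesn't start with this
```

The plain rule fails because robots.txt patterns match from the beginning of the URL path, and your URLs start with `/index.php`, not `/view=send_friend`.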
-
So my link example would look like this in robots.txt?
Disallow: /index.php?option=com_redshop&view=send_friend&pid=&tmpl=component&Itemid=
Or
Disallow: /view=send_friend/
-
You're right. I would disallow via robots.txt with a wildcard (*) wherever a unique item id # could be generated.
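Putting it together, a minimal robots.txt along these lines should block every send-to-friend pop-up regardless of product id (the rule is based on the example URL earlier in this thread):

```
User-agent: *
Disallow: /*view=send_friend
```

As mentioned above, it's worth testing this in Google Webmaster Tools against one of the real pop-up URLs before relying on it.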