Meta robots or robots.txt file?
-
Hi Mozzers!
For parametric URLs, would you recommend meta robots or a robots.txt file?
For example: http://www.exmaple.com//category/product/cat no./quickView (I want to stop /quickView URLs from being indexed). And what's the real difference between the two?
Thanks again!
Kay
-
No problem at all
-Andy
-
Thanks Andy!!!
-
Hi Kay,
If you want to disallow access to a page, then add the following to the Robots.txt file:
Disallow: /quickView
Then test this in Webmaster Tools.
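For reference, a Disallow rule needs to sit under a User-agent line to take effect, so a minimal robots.txt would look something like this (assuming you want the rule to apply to all crawlers):
User-agent: *
Disallow: /quickView
One thing to watch: Disallow patterns match from the start of the URL path, so /quickView only blocks URLs that begin with /quickView. If quickView sits at the end of a deeper path, as in your example URL, you would need Google's wildcard support, e.g. Disallow: /*quickView.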
If you want to tell Google not to index a page, then you need to do this at the page level using a meta robots tag. However, don't do both (at least not at the same time). If you disallow access to a set of pages via robots.txt and then at a later stage add a meta noindex, Google will never see the noindex, because the Disallow in robots.txt prevents it from crawling those pages in the first place.
It really depends on what you are trying to achieve, but it sounds like meta robots is the way to go for you.
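To illustrate, the tag would go in the head of each quickView page; a minimal sketch (noindex keeps the page out of Google's index, while follow still lets crawlers pass link equity through the page's links):
<meta name="robots" content="noindex, follow">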
Edit: here is an interesting read for you.
-Andy
-
Related Questions
-
Meta-description issue in SERPs for different countries
I'm working with a US client on the SEO for their large ecommerce website; I'm working on it from the UK. We've now optimised several of the pages, including updating the meta descriptions etc. The problem is that when I search on the keyword in the UK, I see the new updated version of the meta description in the SERPs. BUT when my client searches on the same keyword in the US, they see the old version of the meta description. Does anyone have any idea why this is happening and how we can resolve it? Thanks, Tanya
Intermediate & Advanced SEO | TanyaKorteling
-
Can someone help me fix a spam problem? Please see the attached file
Hi, I have a spam link issue on my website. I am attaching a report; please help me fix this so my domain can do better. Thanks. aEP1vVy
Intermediate & Advanced SEO | grbassi
-
Recovering old disallow file?
Hi guys, we had an SEO agency do a disallow request on one of our sites a while back. They have no trace of the disallow txt file or of the links they disallowed. Does anyone know if there is a way to recover this file in Google Webmaster Tools, or any way to find which links were disallowed? Cheers.
Intermediate & Advanced SEO | jayoliverwright
-
72KB CSS code directly in the page header (not in external CSS file). Done for faster "above the fold" loading. Any problem with this?
To optimize for Google's page speed, our developer has moved the 72KB of CSS code directly into the page header (not into an external CSS file). This way the above-the-fold loading time was reduced. But may this affect indexing of the page or have any other negative side effects on rankings? I made a quick test and the Google cache seems to have our full pages cached, but may it somehow negatively affect our rankings, or cause Google to index fewer of our pages? (We already have some problems with Google ignoring about 30% of the pages in our sitemap.)
Intermediate & Advanced SEO | lcourse
-
Robots.txt vs noindex
I recently started working on a site that has thousands of member pages that are currently robots.txt'd out. Most pages of the site have 1 to 6 links to these member pages, accumulating into what I regard as something of a link juice cul-de-sac. The pages themselves have little to no unique content or other relevant search play, and for other reasons we still want them kept out of search. Wouldn't it be better to "noindex, follow" these pages and remove the robots.txt block from this URL type? At least that way Google could crawl these pages and pass the link juice on to still other pages, versus flushing it into a black hole. BTW, the site is currently dealing with a hit from Panda 4.0 last month. Thanks! Best... Darcy
Intermediate & Advanced SEO | 94501
-
When you add 10,000 pages that have no real intention of ranking in the SERPs, should you "follow, noindex" them or disallow the whole directory through robots.txt? What is your opinion?
I just want a second opinion 🙂 The customer doesn't want to lose any internal link value by evaporating it through a large number of internal links. What would you do?
Intermediate & Advanced SEO | Zanox
-
Whole site blocked by robots in webmaster tools
My URL is: www.wheretobuybeauty.com.au
This new site has been re-crawled over the last 2 weeks, and in the Webmaster Tools index status the following is displayed: 50,000 pages indexed, 69,000 blocked by robots. The search query 'site:wheretobuybeauty.com.au' returns 55,000 pages. However, all pages in the site do appear to be blocked, and over the 2 weeks the Google search traffic declined from significant to zero (proving this is in fact the case).
This is a Linux PHP site and has the following:
55,000 URLs in sitemap.xml submitted successfully to Webmaster Tools
a robots.txt file that existed but did not have any entries to allow or disallow URLs (today I have removed the robots.txt file completely)
URL redirection within the Linux .htaccess file (there are many rows within this complex set of redirections; the developer has double-checked this file and found that it is valid)
I have read everything that Google and other sources have on this topic and it does not help. I have also checked the Webmaster Tools crawl errors, crawl stats, and malware reports, and there is no problem there related to this issue.
Is this a duplicate content issue? This is a price comparison site where approximately half the products have duplicate product descriptions, duplicated because they are obtained from the suppliers through an XML data file. The suppliers have the descriptions from the same files on their own sites.
Help!!
Intermediate & Advanced SEO | rrogers
-
Robots.txt disallow subdomain
Hi all, I have a development subdomain, which gets copied to the live domain. Because I don't want this dev domain to get crawled, I'd like to implement a robots.txt for this domain only. The problem is that I don't want this robots.txt to disallow the live domain. Is there a way to create a robots.txt for this development subdomain only? Thanks in advance!
Intermediate & Advanced SEO | Partouter