Robots
-
I have just noticed this in my code:
<meta name="robots" content="noindex">
And I have noticed that some of my keywords have dropped. Could this be the reason?
-
It was on every page of the site.
I also noticed that the pages which are no longer indexed now show no PR. Is that expected?
-
Was the homepage one of the pages that included the noindex meta tag?
Even if it was, pages will not all be crawled at the same time or in any particular order. The homepage may have already been crawled before the change was made on your site, or it may not have been crawled at all today if it was visited yesterday, for example.
Crawling results can vary hugely based on a number of factors.
-
The only thing that does not make sense to me is: if the sitemap was processed today, why is the homepage still indexed?
-
Yes, because that is what caused them to take notice of the meta noindex and drop your pages from their search results.
Best of luck with it, feel free to send me a PM if your pages haven't reappeared in Google's search engine over the next few days.
-
Oh! I also noticed in Webmaster Tools that the sitemap was processed today. Does that mean Googlebot has visited the website today?
-
Thanks Geoff, will do what you recommended.
I noticed this in Google Webmaster Tools:
Blocked URLs - 193
Downloaded - 13 hours ago
Status - 200 (success)
-
Hi Gary,
If the pages dropped from Google's index that quickly, then chances are they will be back again almost as quickly. If your website has an XML sitemap, you could try pinging it to the search engines to alert them to revisit your site as soon as possible.
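As a sketch of the sitemap ping mentioned above: both Google and Bing have accepted a plain HTTP GET request with your sitemap's URL appended as a query parameter. The example.com sitemap address below is a placeholder; substitute your own, and note that your search engine of choice may document a different or newer submission method.

```
http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml
http://www.bing.com/ping?sitemap=http://www.example.com/sitemap.xml
```

Visiting either URL in a browser (or fetching it with any HTTP client) is enough to submit the ping.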
It's bad luck that the meta tag was inserted and caused immediate negative effects, but it will be recoverable, and your pages will likely re-enter the index at the same positions they held prior to today.
The key is to bring Googlebot back to your website to recrawl as soon as possible. Publishing a blog post, or creating a backlink from a high-traffic site (a forum is a good example), are some methods of encouraging this.
Hope that helps.
-
Hi Geoff,
The developer said it got added this morning when we rolled out a discount feature on our website; I think it was the CMS adding it automatically. However, a lot of the keywords that were ranking in the top 3 are no longer indexed. Is it just bad luck? Will Google come back?
-
If you are using a content management system, these additional meta tags can often be controlled within your administration panel.
If the meta tag is hard coded into your website header, it will appear on every page of your website and will subsequently result in you not having any pages indexed in search engines.
As Ben points out, the noindex directive instructs search engine robots not to index that particular page. I would recommend addressing this issue as quickly as possible, especially if you have a high-traffic website that is getting crawled frequently.
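To confirm which pages still carry the tag before Google recrawls, you can scan the HTML of each page. This is a minimal sketch; the `has_noindex` helper is hypothetical (not part of any CMS or tool mentioned in this thread), and you would feed it HTML fetched from your own URLs:

```python
import re

def has_noindex(html: str) -> bool:
    """Return True if the HTML contains a meta robots tag with a noindex directive."""
    # Look at each <meta ...> tag so the check works in either attribute order.
    for tag in re.findall(r'<meta[^>]+>', html, flags=re.IGNORECASE):
        if re.search(r'name\s*=\s*["\']robots["\']', tag, re.IGNORECASE) and \
           re.search(r'content\s*=\s*["\'][^"\']*noindex', tag, re.IGNORECASE):
            return True
    return False

page = '<head><meta name="robots" content="noindex"></head>'
print(has_noindex(page))  # True for a page carrying the tag
```

Running this over every template or cached page quickly shows whether the fix has actually reached the whole site.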
-
Thanks for your quick reply, Ben.
It does not seem to be all my pages that have fallen off, just some. The developer said it only got added this morning by mistake.
I actually typed the full URL into Google and it no longer appears. I was ranked no. 2 for that particular keyword, receiving about 150 clicks per day. Not happy!
-
Actually, on second thoughts: YES. Yes, it probably is the reason your terms are dropping.
-
Could be.
That's a directive that tells search engines not to include that page in their indexes.
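For reference, the tag lives in each page's <head>. A quick sketch of the common variants (the values shown are the standard robots meta directives; check what your CMS actually outputs):

```html
<head>
  <!-- Removes the page from search results; robots may still follow its links -->
  <meta name="robots" content="noindex">

  <!-- Removes the page AND tells robots not to follow its links -->
  <meta name="robots" content="noindex, nofollow">

  <!-- The default behaviour; omitting the tag entirely has the same effect -->
  <meta name="robots" content="index, follow">
</head>
```

So deleting the tag outright is equivalent to the "index, follow" default.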