Robots
-
I have just noticed this in my code:
<meta name="robots" content="noindex">
I have also noticed that some of my keywords have dropped. Could this be the reason?
-
It was on every page of the site.
I also noticed that the pages that are no longer indexed now have no PR. Is that expected?
-
Was the homepage one of the pages that included the noindex meta tag?
Even if it was, pages are not all crawled at the same time or in a particular order. The homepage may have already been crawled before the change was made on your site, or it may not have been crawled at all today if it was visited yesterday, for example.
Crawling results can vary hugely based on a number of factors.
-
The only thing that does not make sense to me is this: if the sitemap was processed today, why is the homepage still indexed?
-
Yes, because that is what caused them to take notice of the meta noindex and drop your pages from their search results.
Best of luck with it. Feel free to send me a PM if your pages haven't reappeared in Google's search results over the next few days.
-
Oh! I also noticed in Webmaster Tools that the sitemap was processed today. Does that mean Googlebot has visited the website today?
-
Thanks Geoff, will do what you recommended.
I noticed this in Google Webmaster Tools:
Blocked URLs - 193
Downloaded - 13 hours ago
Status - 200 (success)
-
Hi Gary,
If the pages dropped from Google's index that quickly, then chances are they will be back again almost as quickly. If your website has an XML sitemap, you could try pinging it to the search engines to alert them to revisit your site as soon as possible.
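For example, a sitemap ping is just a URL you visit in a browser (or fetch with any HTTP tool), with your own sitemap address substituted for the example.com placeholder below:
http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml
http://www.bing.com/ping?sitemap=http://www.example.com/sitemap.xml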
It's bad luck that the meta tag was inserted and caused immediate negative effects, but it is recoverable, and your pages will likely re-enter the index at the same positions they held prior to today.
The key is to bring Google's bot back to your website to recrawl it as soon as possible. Publishing a blog post or creating a backlink from a high-traffic site (a forum is a good example) are two ways of encouraging this.
Hope that helps.
-
Hi Geoff,
The developer said it got added this morning when we rolled out a discount feature on our website; I think the CMS added it automatically. However, a lot of the keywords that were ranking in the top 3 are no longer indexed. Is it just bad luck? Will Google come back?
-
If you are using a content management system, these additional meta tags can often be controlled within your administration panel.
If the meta tag is hard-coded into your website header, it will appear on every page of your website and will result in none of your pages being indexed in search engines.
As Ben points out, the noindex directive instructs search engine robots not to index that particular page. I would recommend addressing this issue as quickly as possible, especially if you have a high-traffic website that is getting crawled frequently.
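To illustrate, a hard-coded header might look something like the sketch below; the file name and surrounding markup are hypothetical, but the offending line is the one from the original question:
<!-- header.php (hypothetical shared template, output on every page) -->
<head>
<title>Example Page</title>
<meta name="robots" content="noindex"> <!-- remove this line, or manage it per page via the CMS -->
</head>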
-
Thanks for your quick reply Ben.
It does not seem to be all my pages that have fallen off, just some; the developer said it only got added this morning, by mistake.
I actually typed the full URL into Google and it does not appear anymore. I was ranked no. 2 for that particular keyword, receiving about 150 clicks per day. Not happy!
-
Actually, on second thoughts - YES. Yes, it probably is the reason your terms are dropping.
-
Could be.
That's a directive that tells search engines not to include that page in their indexes.
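For reference, the common forms of the robots meta tag are:
<meta name="robots" content="index, follow"> <!-- the default behaviour: index the page and follow its links -->
<meta name="robots" content="noindex"> <!-- keep the page out of the index; its links may still be followed -->
<meta name="robots" content="noindex, nofollow"> <!-- keep the page out of the index and do not follow its links -->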
Related Questions
-
Block session id URLs with robots.txt
Hi, I would like to block all URLs with the parameter '?filter=' from being crawled by including them in the robots.txt. Which directive should I use:
User-agent: *
Disallow: ?filter=
or
User-agent: *
Disallow: /?filter=
In other words, is the forward slash at the beginning of the disallow directive necessary? Thanks!
-
Robots.txt - Googlebot - Allow... what's it for?
Hello - I just came across this in robots.txt for the first time, and was wondering why it is used. Why would you have to proactively tell Googlebot to crawl JS/CSS, and why would you want it to? Any help would be much appreciated - thanks, Luke
User-Agent: Googlebot
Allow: /*.js
Allow: /*.css
-
Should I disallow all URL query strings/parameters in Robots.txt?
Webmaster Tools correctly identifies the query strings/parameters used in my URLs, but still reports duplicate title tags and meta descriptions for the original URL and the versions with parameters. For example, Webmaster Tools would report duplicates for the following URLs, despite correctly identifying the "cat_id" and "kw" parameters:
/Mulligan-Practitioner-CD-ROM
/Mulligan-Practitioner-CD-ROM?cat_id=87
/Mulligan-Practitioner-CD-ROM?kw=CROM
Additionally, these pages have self-referential canonical tags, so I would think I'd be covered, but I recently read that another Mozzer saw a great improvement after disallowing all query/parameter URLs, despite Webmaster Tools not reporting any errors. As I see it, I have two options:
1. Manually tell Google that these parameters have no effect on page content via the URL Parameters section in Webmaster Tools (in case Google is unable to automatically detect this, and I am being penalized as a result).
2. Add "Disallow: *?" to hide all query/parameter URLs from Google. My concern here is that most backlinks include the parameters, and in some cases these parameter URLs outrank the original.
Any thoughts?
-
Robots.txt File Not Appearing, but seems to be working?
Hi Mozzers, I am conducting a site audit for a client, and I am confused by what they are doing with their robots.txt file. GWT shows that there is a file and that it is blocking about 12K URLs (image attached). It also shows that the file was downloaded successfully 10 hours ago. However, when I go to the robots.txt file link, the page is blank. Would they be doing something advanced to block URLs while hiding the file from users? It appears to correctly be blocking log-ins, but I would like to know for sure that it is working correctly. Any advice on this would be most appreciated. Thanks! Jared
-
Robots.txt Blocked Most Site URLs Because of Canonical
Had a bit of a "gotcha" in Magento. We had the Yoast Canonical Links extension, which worked well, but then we installed Mageworx SEO Suite, which broke canonical links. Unfortunately it started putting www.mysite.com/catalog/product/view/id/516/ as the canonical link, and all URLs matching /catalog/product/view/* are blocked in robots.txt. So unfortunately we told Google that the correct page is also a blocked page. The pages haven't been removed as far as I can see, but traffic has certainly dropped. We have also, at the same time, had some site changes grouping some pages and adding 301 redirects. I resubmitted the sitemap and did a Fetch as Google. Any other ideas? Any idea how long it will take to become unblocked?
-
Should all pages on a site be included in either your sitemap or robots.txt?
I don't have a specific scenario here, but I'm curious, as I fairly often come across sites that have, for example, 20,000 pages but only 1,000 in their sitemap. If only 1,000 of their URLs are ones they want included in their sitemap and indexed, should the others be excluded using robots.txt or a page-level exclusion? Is there a point to having pages that are included in neither and leaving it up to Google to decide?
-
Using 2 wildcards in the robots.txt file
I have a URL string which I don't want to be indexed. It includes the characters _Q1 in the middle of the string. So in the robots.txt, can I use 2 wildcards in the string to take out all of the URLs with that in them? So something like /*_Q1*. Will that pick up and block every URL with those characters in the string? Also, this is not directly off the root, but in a secondary directory, so .com/.../_Q1. So do I have to format the robots.txt as /*/*_Q1*, as it will be in the second folder, or will just using /*_Q1* pick up everything no matter what folder it is in? Thanks.
-
Robots.txt is blocking WordPress pages from Googlebot?
I have a robots.txt file on my server which I did not develop; it was created by the web designer at the company before me. There is also a WordPress plugin that generates a robots.txt file. How do I unblock all the WordPress pages from Googlebot?