Robots
-
I have just noticed this in my code:
<meta name="robots" content="noindex">
I have also noticed that some of my keywords have dropped. Could this be the reason?
-
It was on every page of the site.
I also noticed that the pages which are no longer indexed have no PR. Is that expected?
-
Was the homepage one of the pages that included the noindex meta tag?
Even if it was, pages will not all be crawled at the same time or in any particular order. The homepage may have already been crawled before the change was made on your site, or it may not have been crawled at all today if it was visited yesterday, for example.
Crawling results can vary hugely based on a number of factors.
-
The only thing that does not make sense to me is this: if the sitemap was processed today, why is the homepage still indexed?
-
Yes, because that is what caused them to take notice of the meta noindex and drop your pages from their search results.
Best of luck with it, and feel free to send me a PM if your pages haven't reappeared in Google's search results over the next few days.
-
Oh! I also noticed in Webmaster Tools that the sitemap was processed today. Does that mean Googlebot has visited the website today?
-
Thanks Geoff, I will do what you recommended.
I noticed this in Google Webmaster Tools:
Blocked URLs - 193
Downloaded - 13 hours ago
Status - 200 (success)
-
Hi Gary,
If the pages dropped from Google's index that quickly, then chances are they will be back again almost as quickly. If your website has an XML sitemap, you could try pinging it to the search engines to alert them to revisit your site as soon as possible.
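For example, pinging is nothing more than requesting a URL like one of the following in your browser or from a script (this assumes your sitemap lives at the usual /sitemap.xml location, and example.com is just a placeholder for your own domain):
http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml
http://www.bing.com/ping?sitemap=http://www.example.com/sitemap.xml
If your sitemap URL contains query parameters, URL-encode it before appending it to the ping address.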
It's bad luck that the meta tag was inserted and caused an immediate negative effect, but it is recoverable, and your pages will most likely re-enter the index at the same positions they held prior to today.
The key is simply to bring Googlebot back to your website to recrawl it as soon as possible. Publishing a blog post, or creating a backlink from a high-traffic site (a forum is a good example), are a couple of ways of encouraging this.
Hope that helps.
-
Hi Geoff,
The developer said it got added this morning when we rolled out a discount feature on our website; I think the CMS added it automatically. However, a lot of the keywords that were ranking in the top 3 are no longer indexed. Is it just bad luck? Will Google come back?
-
If you are using a content management system, these additional meta tags can often be controlled within your administration panel.
If the meta tag is hard-coded into your website's header template, it will appear on every page of your website and will subsequently result in none of your pages being indexed in search engines.
As Ben points out, the noindex directive instructs search engine robots not to index that particular page. I would recommend addressing this issue as quickly as possible, especially if you have a high-traffic website that is crawled frequently.
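For reference, the offending tag in the header template will look something like the first line below, and the fix is simply to delete it (or, if your CMS insists on writing out a robots meta tag, switch it to the crawlable default on the second line):
<meta name="robots" content="noindex">  <!-- the tag causing pages to drop out of the index -->
<meta name="robots" content="index, follow">  <!-- the default; equivalent to having no robots meta tag at all -->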
-
Thanks for your quick reply, Ben.
It does not seem to be all of my pages that have fallen off, just some. The developer said it only got added this morning by mistake.
I actually typed the full URL into Google and it does not appear anymore. I was ranked no. 2 for that particular keyword and was receiving about 150 clicks per day. Not happy!
-
Actually on second thoughts - YES. Yes it probably is the reason your terms are dropping.
-
Could be.
That's a directive that tells search engines not to include that page in their indexes.
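As a rough illustration, the tag normally sits in the <head> section of the affected page, something like this (the page title here is just a placeholder):
<head>
  <title>Example page</title>
  <meta name="robots" content="noindex">  <!-- tells robots not to add this page to their index -->
</head>
Remove that meta tag and the page becomes eligible for indexing again the next time it is crawled.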
Related Questions
-
Twitter Robots.txt
Hello Moz World, I'm trying to wrap my head around all of the different robots.txt files out there, so I decided to dive into a site like Twitter and look at their robots.txt. Now I'm super confused. What are they telling the search engines with /hashtag/*src=? Why don't they just use:
User-agent: *
Disallow:
Instead, they address each search engine individually. Is there any benefit to this? Thanks for all of the awesome responses! B/R, Will H.
-
Robots.txt for Facet Results
Hi, does anyone know how to properly add facet URLs to robots.txt? Here is an example of one of our facet URLs:
http://www.key.co.uk/en/key/platform-trolleys-trucks#facet:-10028265807368&productBeginIndex:0&orderBy:5&pageView:list&
Everything after the # will need to be blocked on all pages with a facet. Thank you.
-
Baidu Spider appearing in robots.txt
Hi, I'm not too sure what to do about this or what to think of it. This magically appeared in my company's robots.txt file (literally magically appeared; the text is below):
User-agent: Baiduspider
User-agent: Baiduspider-video
User-agent: Baiduspider-image
Disallow: /
I know that Baidu is the Google of China, but I'm not sure why this would appear in our robots.txt all of a sudden. Should I be worried about a hack? Also, would I want to disallow Baidu from crawling my company's website? Thanks for your help, -Reed
-
How to handle a blog subdomain on the main sitemap and robots file?
Hi, I have some confusion about how our blog subdomain is handled in our sitemap. We have our main website, example.com, and our blog, blog.example.com. Should we list the blog subdomain URL in our main sitemap? In other words, is listing a subdomain allowed in the root sitemap? What does the final structure look like in terms of the sitemap and robots file? Specifically:
example.com/sitemap.xml: would I include a link to our blog subdomain (blog.example.com)?
example.com/robots.txt: would I include a link to BOTH our main sitemap and blog sitemap?
blog.example.com/sitemap.xml: would I include a link to our main website URL (even though it's not a subdomain)?
blog.example.com/robots.txt: does a subdomain need its own robots file?
I'm a technical SEO and understand the mechanics of much of on-page SEO, but for some reason I never found an answer to this specific question and I am wondering how the pros do it. I appreciate your help with this.
-
Blocking out specific URLs with robots.txt
I've been trying to block out a few URLs using robots.txt, but I can't seem to get the specific one I'm trying to block. Here is an example: I'm trying to block something.com/cats but not block something.com/cats-and-dogs. It seems that if I set up my robots.txt as follows:
Disallow: /cats
it blocks both URLs. When I crawl the site with Screaming Frog, that Disallow causes both URLs to be blocked. How can I set up my robots.txt to specifically block /cats? I thought it was by doing it the way I was, but that doesn't seem to solve it. Any help is much appreciated; thanks in advance.
-
Effect duration of robots.txt file.
On my website there is also a demo site, which is indexed in Google but no longer needed, so I created a robots.txt file and uploaded it to the server yesterday. In the demo folder there are some HTML files, and I want to remove everything in the demo folder from Google, but it is still showing in Webmaster Tools. The robots.txt contains:
User-agent: *
Disallow: /demo/
How long will this take to be removed from Google? And is there any alternative way of doing it?
-
Should I robots block this directory?
There are about 43k pages indexed in this directory, and while they are helpful to end users, I don't see them being a great source of unique content for search engines. Would you robots.txt-block or meta noindex,nofollow these pages in the /blissindex/ directory? For example:
http://www.careerbliss.com/blissindex/petsmart-index-980481/
http://www.careerbliss.com/blissindex/att-index-1043730/
http://www.careerbliss.com/blissindex/facebook-index-996632/
-
Blocking Dynamic URLs with Robots.txt
Background: My e-commerce site uses a lot of layered navigation and sorting links. While this is great for users, it ends up in a lot of URL variations of the same page being crawled by Google. For example, a standard category page:
www.mysite.com/widgets.html
...which uses a "Price" layered navigation sidebar to filter products based on price, also produces the following URLs, which all link to the same page:
http://www.mysite.com/widgets.html?price=1%2C250
http://www.mysite.com/widgets.html?price=2%2C250
http://www.mysite.com/widgets.html?price=3%2C250
There are literally thousands of these URL variations being indexed, so I'd like to use robots.txt to disallow them.
Question: Is this a wise thing to do? Or does Google take layered navigation links into account by default, so I don't need to worry? To implement, I was going to do the following in robots.txt:
User-agent: *
Disallow: /*?
Disallow: /*=
...which would prevent any dynamic URL with a '?' or '=' from being indexed. Is there a better way to do this, or is this a good solution? Thank you!