Is Googlebot ignoring directives? Or is it me?
-
I saw an answer to a question in this forum a few days ago that said it was a bad idea to use robots.txt to tell Googlebot to go away.
That SEO said it was much better to use the META tag to say noindex,nofollow.
So I removed the robots.txt directive and added this META tag:
<meta robots='noindex,nofollow'>
Today, I see Google showing my send-to-a-friend page where I expected the real page to be.
Does it mean Google is stupid?
Does it mean Google ignores the robots META tag?
Does it mean short pages have more value than long pages?
Does it mean if I convert my whole site to snippets, I'll get more traffic?
Does it mean garbage trumps content?
I have more questions, but this is more than enough.
-
Thank you, Ryan.
They completely ignored the meta tags, completely messing up our SERPs. So I put the directive back in robots.txt. I won't trust Google again to do the right thing.
-
Hi Allan,
It is a best practice to use meta tags to indicate your indexing preference to search engines.
Normally the recommended implementation would be "noindex, follow", but without examining your site it is impossible to know for sure what is right for your pages.
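For reference, a valid robots meta tag needs both a name attribute and a content attribute, and belongs in the <head> of the page:
<meta name="robots" content="noindex, follow">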
Google honors meta tags, but there are several things that could be the source of your issue. For example, if you did not use valid syntax, the tag may not be honored. And if you are blocking the page in robots.txt, then search engines cannot read the tag at all.
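For example, if your robots.txt still contains a rule like the one below (the path here is only illustrative), Googlebot will never fetch the page and so will never see the meta tag:
User-agent: *
Disallow: /send-to-friend/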
As for the last three questions, the simple answer is that quality content is best.
If you can share the URL of the page involved, we can offer a specific response to the implementation of the meta tag.