Empty Meta Robots Directive - Harmful?
-
Hi,
We had a coding update, and a side effect was that our meta robots directive was emptied. In other words, it now reads as `<meta name="robots" content="" />` on all of the site.
I've since noticed that Google's cache date on all of the pages - at least, the ones I tested - is no later than 17 December '12. That's the Monday after the directive was emptied en masse.
So, A: does anyone have solid evidence of an empty directive causing problems? Past experience, a Matt Cutts or Rand Fishkin quote, etc.
And then B: it seems fairly well correlated, but does my entire site's homogeneous cache date point to this tag removal? Or is it fairly normal to see a uniform cache date across a large site? (We're a large ecommerce site.)
Our site: http://www.zando.co.za/
I'm having the directive reinstated as soon as Dev time permits.
And then, for extra credit: is there a way, with Google's API or perhaps some other tool, to run an arbitrary list of URLs and retrieve their cache dates? I'd want to do this for diagnostic purposes, and preferably in a way that's OK with Google. I'd rather avoid cURLing the cached URLs and scraping out the dates with Bash, or anything of that sort.
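(For the tag itself, rather than cache dates, I can at least spot-check with a small script. A rough sketch, assuming Python 3 and nothing beyond the standard library - the class and function names are just my own:)

```python
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collects the content of the first <meta name="robots"> tag seen."""
    def __init__(self):
        super().__init__()
        self.robots = None

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "meta" and self.robots is None:
            d = dict(attrs)
            if (d.get("name") or "").lower() == "robots":
                self.robots = d.get("content", "")

def meta_robots(html):
    """Return the robots directive in an HTML document, or None if absent."""
    p = MetaRobotsParser()
    p.feed(html)
    return p.robots

# Demo: the emptied tag from our broken build parses as an empty string.
print(repr(meta_robots('<head><meta name="robots" content=""></head>')))  # prints ''
```

Fetching each URL (e.g. with urllib.request.urlopen) and feeding the response body to meta_robots would cover a whole list; the cache dates themselves, though, would still need scraping, which is what I'd like to avoid.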
Cheers,
-
Can't answer the API question I'm afraid.
However, on the other bits: if you don't specify a robots directive, search engines will behave in the default manner (i.e. index, follow), unless you're blocking them another way (e.g. robots.txt).
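For clarity, the emptied tag versus the explicit default looks something like this (a sketch - the exact markup depends on your templates):

```html
<!-- What the broken build presumably emits: an empty directive,
     which carries no restrictions and is treated like no tag at all -->
<meta name="robots" content="" />

<!-- The explicit equivalent of the default behaviour -->
<meta name="robots" content="index, follow" />
```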
A good test of this: if you've launched a page since the 17th that you know has been crawled, check whether it's in Google's index.
Check your crawl data in GWT (Google Webmaster Tools), and don't worry about the cache: your users will always be taken to the current version of your site. It's only a concern if you're no longer being crawled.
If it's an ecommerce site, it should just be one site-wide tweak to put index, follow back in. Then re-create and re-submit your sitemap.xml in GWT, and Google will go after all your new content as well - i.e. it hurries up the re-crawl.
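A minimal sitemap.xml skeleton, per the sitemaps.org protocol (the URL and date here are just examples pulled from this thread):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.zando.co.za/</loc>
    <lastmod>2012-12-17</lastmod>
  </url>
  <!-- one <url> entry per indexable page -->
</urlset>
```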
Hope something in there helped!