WordPress error
-
In Google Webmaster Tools I'm getting a Severe Health Warning regarding our robots.txt file, which reads:
User-agent: *
Crawl-delay: 20

User-agent: 008
Disallow: /

I'm wondering how I can fix this and stop it happening again.
The site was hacked about 4 months ago but I thought we'd managed to clear things up.
Colin
-
This will be my first post on SEOmoz, so bear with me.
The way I understand it, robots read the robots.txt file from top to bottom, and once they find a rule that applies to them they stop reading and begin crawling. So basically, a robots.txt file written as:
User-agent: *
Disallow:
Crawl-delay: 20
User-agent: 008
Disallow: /
would not have the desired result, as user-agent 008 would first read the top group:
User-agent: *
Disallow:
Crawl-delay: 20
and then begin crawling your site, because the first group tells all user-agents that no pages or directories are disallowed.
The corrected way to write this would be:
User-agent: 008
Disallow: /
User-agent: *
Disallow:
Crawl-delay: 20
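If you want to sanity-check a robots.txt file before deploying it, Python's standard-library parser is handy. Here's a minimal sketch that parses the corrected rules and confirms user-agent 008 is blocked while everyone else is allowed; the example.com URL is a placeholder. (Worth noting: per the robots.txt standard, major crawlers pick the most specific matching User-agent group regardless of file order, so putting the 008 group first is a safe convention rather than a strict requirement.)

```python
# A minimal sketch: verify the corrected robots.txt rules with Python's
# standard-library parser. The example.com URL is a placeholder.
import urllib.robotparser

rules = """\
User-agent: 008
Disallow: /

User-agent: *
Disallow:
Crawl-delay: 20
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "http://example.com/page"))  # True: allowed
print(rp.can_fetch("008", "http://example.com/page"))        # False: blocked by Disallow: /
```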
-
Hi Peter,
I've tested the robots.txt file in Webmaster Tools and it now seems to be working as it should, and Google appears to be seeing the same file I have on the server.
I'm afraid this side of things isn't my area of expertise, so it's been a bit of a minefield.
I've taken out a subscription with sucuri.net and taken various other steps that will hopefully help with security. But who knows?
Thanks,
Colin
-
Google is seeing the same Robots.txt content (in GWT) that you show in the physical file, right? I just want to make sure that, when the site was hacked, no changes were made that are showing different versions of files to Google. It sounds like that's not the case here, but it definitely can happen.
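One rough way to check this yourself is to fetch robots.txt with two different User-Agent headers and compare the responses. A minimal sketch using only the Python standard library follows; note that many hacks cloak by IP address rather than user-agent string, so an identical result here isn't conclusive.

```python
# A rough cloaking check: fetch robots.txt as a normal browser and again
# with Googlebot's user-agent string, then compare the two responses.
import urllib.request

URL = "http://nile-cruises-4u.co.uk/robots.txt"
AGENTS = {
    "browser": "Mozilla/5.0",
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                 "+http://www.google.com/bot.html)",
}

versions = {}
for name, ua in AGENTS.items():
    req = urllib.request.Request(URL, headers={"User-Agent": ua})
    with urllib.request.urlopen(req) as resp:
        versions[name] = resp.read().decode("utf-8", errors="replace")

if versions["browser"] == versions["googlebot"]:
    print("Same robots.txt served to both user-agents.")
else:
    print("Different robots.txt served to Googlebot -- investigate!")
```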
-
The blog isn't showing now, and my hosts say that the index.php file is missing from the directory, but I can see it.
Strange.
Have contacted them again to see what the problem can be.
Bit of a wasted Saturday!
-
Thanks Keith. Just contacting our hosts.
Nightmare!
-
Looks like a 403 permissions problem; that's a server-side error... Make sure you have the correct permissions set on the blog folder in IIS. Personally, I always host on Linux...
-
Mind you, the whole blog is now showing an error message and can't be viewed, so it looks like an afternoon of trial and error!
-
Thanks very much Keith. I've just edited the file as suggested.
I see the error but, as I am the web guy, I can't figure out how to get rid of it.
I think it might be a plugin that's causing it, so I'm going to disable them and re-enable them one at a time.
I've just PM'd you by the way.
Thanks for your help Keith.
Colin
-
Use this:
User-agent: *
Disallow: /blog/wp-admin/
Disallow: /blog/wp-includes/
Sitemap: http://nile-cruises-4u.co.uk/sitemap.xml
And FYI, you have the following error on your blog:
Warning: is_readable() [function.is-readable]: open_basedir restriction in effect. File(D:\home\nile-cruises-4u.co.uk\wwwroot\blog/wp-content/plugins/D:\home\nile-cruises-4u.co.uk\wwwroot\blog\wp-content\plugins\websitedefender-wordpress-security/languages/WSDWP_SECURITY-en_US.mo) is not within the allowed path(s): (D:\home\nile-cruises-4u.co.uk\wwwroot) in D:\home\nile-cruises-4u.co.uk\wwwroot\blog\wp-includes\l10n.php on line 339
Get your web guy to look at that, it appears at the top of every blog page for me...
Hope that helps,
Keith
-
Thanks Keith.
Only part of our site is WordPress-based. Would that be a problem if I use the example you kindly suggested?
-
I gave you an example above of a basic robots.txt file that I use on one of my WordPress sites; I would suggest using that for now.
I would not bother messing around with crawl delay in robots.txt; as Peter said above, there are better ways to achieve this... Plus, I doubt you need it anyway.
Google normally caches robots.txt for about 24 hours in my experience... So it's possible the old cached version is still being used.
-
Hi Guys,
Thanks so much for your help. As you say, Troy, that's definitely not what I want.
I assumed when we were hacked (twice in 8 months) that it might have been a competitor, as we are in a very competitive niche. I might be very wrong there, but we have certainly lost our top ranking on Google.co.uk for our main key phrases and are now at about position 7 after about 3 years at number 1.
So when I saw on Google Webmaster Tools yesterday that we had a severe health warning and that Googlebot was being prevented from crawling our site, I thought it might be an after-effect of the hack.
Today, even though I changed the robots.txt file yesterday, GWT is showing 1,000 pages with errors (285 Access Denied and 719 Not Found) and this message: Googlebot is blocked from http://nile-cruises-4u.co.uk/
I've just tested the robots.txt via GWT and now get this message:
Allowed. Detected as a directory; specific files may have different restrictions.
So maybe the pages will become accessible to Googlebot shortly and the Access Denied message will disappear. I've changed the robots.txt file to:
User-agent: *
Crawl-delay: 20
But should I change it to a better version? Sorry guys, I'm an online travel agent and not great at coding and really techie stuff, although I'm learning pretty quickly about the bad stuff! I seem to have a few problems getting this sorted and wonder if this is part of why our page position is dropping?
-
I would simplify your robots.txt to read something like:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Sitemap: http://www.your-domain.com/sitemap.xml
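To confirm the path rules behave as intended, the same standard-library parser can check a few representative URLs. Disallow rules are simple path prefixes, so only the two WordPress directories should be blocked; the example.com URLs below are placeholders.

```python
# A minimal sketch: confirm the simplified rules block only the WordPress
# admin and includes paths. Disallow values match as path prefixes.
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "http://example.com/wp-admin/options.php"))  # False: blocked
print(rp.can_fetch("Googlebot", "http://example.com/nile-cruises/"))         # True: crawlable
```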
-
That's odd: "008" appears to be the user agent for "80legs", a custom crawler platform. I'm seeing it in other Robots.txt files.
-
I'm not 100% sure what he's seeing, but when I plug his robots.txt into the robots analysis tool, I get this back:
Googlebot blocked by line 5: Disallow: /
Detected as a directory; specific files may have different restrictions
However, when I gave the top "User-agent: *" group an explicit "Disallow: ", it seemed to fix the problem. It's as if the tool didn't understand that the "Disallow: /" was meant only for the 008 user-agent?
-
Not honestly sure what User-agent "008" is, but that seems harmless. Why the crawl delay? There are better ways to handle that than Robots.txt, if a crawler is giving you trouble.
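For instance, Googlebot doesn't honor Crawl-delay anyway (its crawl rate is set inside GWT), and a genuinely troublesome bot can be throttled or refused at the server level instead. Here's a minimal sketch of the idea as Python WSGI middleware, purely for illustration since this site actually runs WordPress on IIS; the agent token and interval are made-up examples.

```python
# Purely illustrative: throttle or refuse an abusive crawler at the
# application layer instead of relying on Crawl-delay. The agent token
# and interval are made-up; a WordPress/IIS site would use the server's
# own request-filtering rules rather than Python.
import time

BLOCKED_AGENTS = ("008",)   # hypothetical: crawler tokens to refuse outright
MIN_INTERVAL = 20.0         # seconds allowed between requests per bot

_last_seen = {}

def crawler_guard(app):
    """Wrap a WSGI app with user-agent blocking and crude bot throttling."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(token in ua for token in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Crawler not permitted."]
        if "bot" in ua.lower():
            now = time.time()
            if now - _last_seen.get(ua, 0.0) < MIN_INTERVAL:
                start_response("429 Too Many Requests",
                               [("Retry-After", str(int(MIN_INTERVAL)))])
                return [b"Slow down, please."]
            _last_seen[ua] = now
        return app(environ, start_response)
    return middleware
```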
Was there a specific message/error in GWT?
-
I think, if you have a robots.txt file reading as you show above:
User-agent: *
Crawl-delay: 20

User-agent: 008
Disallow: /
That basically says, "Don't crawl my site at all." (The "Disallow: /" means: I'm not allowing anything to be crawled by any search engine that pays attention to robots.txt.)
So...I'm guessing that's not what you want?
(Bah... ignore that. I missed the separate "User-agent" groups. I'm a fool.)
Actually, this seems to have solved your issue... make sure you explicitly tell all other user-agents that they are allowed:
User-agent: *
Disallow:
Crawl-delay: 20

User-agent: 008
Disallow: /
The empty "Disallow:" under "User-agent: *" says, "I'm not disallowing anything for other user-agents." The "Disallow: /" under "User-agent: 008" then applies only to that crawler.