Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies. More details here.
Removing robots.txt on WordPress site problem
-
Hi, I'm a little confused. I ticked the box in WordPress to allow search engines to crawl my site (I had previously asked them not to), but Google Webmaster Tools is telling me a robots.txt file is still blocking them, so I'm unable to submit the sitemap.
I checked the source code and the robots instruction is gone, so I'm a little lost. Any ideas, please?
-
Hi,
I edited the robots.txt file for my website http://debtfreefrombankruptcy.com yesterday to allow search engines to crawl my site. However, Google isn't recognizing the new file and is still saying that my sitemap is blocked from search. Here is a link to the file itself:
http://www.debtfreefrombankruptcy.com/robots.txt
The Blocked URLs tester said that the file allows Google to crawl the site, but in actuality it still isn't recognizing the new file. Any advice would be appreciated. Thanks!
-
I can help you out as this issue DROVE ME NUTS.
1. I didn't have a robots.txt (yet)
2. I had Yoast installed
3. I'm pretty sure it created a robots.txt even though one doesn't exist in my root (.com/here)
4. My Google Webmaster Tools shows this:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /cgi-bin
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /feed
Disallow: /comments
Disallow: /category/*/*
Disallow: /trackback
Disallow: /feed
Disallow: /comments
Disallow: /*?*
Disallow: /*?
Allow: /wp-content/uploads
Allow: /assets
Create a robots.txt:
1. Log in to WordPress
2. Click SEO in your side toolbar (Yoast WordPress plugin settings)
3. Go to Edit Files under SEO (in the side toolbar)
And now you have the option to edit your robots.txt file.
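If you want to sanity-check what a Yoast-style rules file actually blocks before uploading it, Python's standard-library robots.txt parser can be fed the file contents directly. The rules below are a trimmed version of the file above, and the test paths are illustrative:

```python
# Check locally which paths a robots.txt blocks, using only the
# Python standard library. Rules trimmed from the Yoast-style file
# above; the URL paths tested are examples.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Allow: /wp-content/uploads
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

for path in ("/", "/wp-admin/options.php", "/wp-content/uploads/logo.png"):
    verdict = "allowed" if parser.can_fetch("Googlebot", path) else "blocked"
    print(path, "->", verdict)
# / -> allowed
# /wp-admin/options.php -> blocked
# /wp-content/uploads/logo.png -> allowed
```

Note that Python applies rules in file order, whereas Google uses longest-match precedence, so for more intricate files the verdicts can differ.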
-
Hi Sofia
I just checked and see your homepage indexed in google.co.uk with a cache date of April 26th. You should be all set!
-Dan
-
Quick update: by amending the robots.txt file and switching the sitemap plugin over to Yoast, I finally got the sitemap to index without robots.txt warnings, although the home page of the site was not indexed, oh dear. 5 of the 7 pages in the sitemap were indexed by Google, so it's a start, but there's some more investigating to be done on my side.
-
Dan,
Can't thank you enough! The sitemap request is still pending in Google (maybe I sent too many requests), but it's time to sit back and wait for the good news, hopefully. Thanks again.
-
Hi Sofia
I just ran the same validator on your sitemap and it went through fine - see screenshot
What I meant was that you should just be sure Google Webmaster Tools accepts the sitemap as valid - if so, there's no need to run it through a 3rd-party validator. Apologies if I didn't state it clearly!
Let me know, but from what I can see it looks good!
-Dan
EDIT - Looking more closely, it looks like you ran the homepage through the validator - you would actually enter the sitemap address itself in the validator: http://containerforsale.co.uk/sitemap.xml
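For what it's worth, the "Undefined entity" warnings in the validator output arise because entities like &raquo; and &nbsp; are defined by HTML, not by bare XML, so feeding a homepage to an XML validator trips over them where the sitemap itself should parse cleanly. A quick standard-library illustration:

```python
# HTML-only entities are undefined in bare XML, which is why an XML
# validator chokes on a homepage: &raquo; fails, while the equivalent
# numeric character reference &#187; parses fine.
import xml.etree.ElementTree as ET

html_like = "<p>Next &raquo;</p>"   # HTML-only entity: invalid as bare XML
numeric = "<p>Next &#187;</p>"      # numeric character reference: valid XML

try:
    ET.fromstring(html_like)
    parse_error = None
except ET.ParseError as err:
    parse_error = err               # undefined entity &raquo;

print("HTML entity as XML ->", parse_error)
print("Numeric reference  ->", ET.fromstring(numeric).text)
```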
-
Hi Dan,
I followed the above advice and switched to the Yoast-generated sitemap, but after testing on http://www.xml-sitemaps.com/validate-xml-sitemap.html I got the following result - no idea what it means, but it looks nasty...
Schema validating with XSV 3.1-1 of 2007/12/11 16:20:05
Schema validator crashed
The maintainers of XSV will be notified; you don't need to send mail about this unless you have extra information to provide.
If there are Schema errors reported below, try correcting them and re-running the validation.
Target: http://containerforsale.co.uk
(Real name: http://containerforsale.co.uk
Server: Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4)
The target was not assessed
Low-level XML well-formedness and/or validity processing output:
Warning: Undefined entity raquo in unnamed entity at line 16 char 83 of http://containerforsale.co.uk
Warning: Undefined entity nbsp in unnamed entity at line 160 char 10 of http://containerforsale.co.uk
Error: Expected ; after entity name, but got = in unnamed entity at line 274 char 631 of http://containerforsale.co.uk
-
Sofia
You are using Yoast SEO plugin for WordPress, so use the XML sitemap within Yoast. You don't need a separate plugin for the XML sitemap. And yes, within Yoast turn the sitemap on.
Hope that helps!
-Dan
-
Indeed, thanks everyone - it's really appreciated!
I have updated the robots.txt as indicated and re-submitted the sitemap, but it looks like Google still has problems with my site, since the error warning for robots is still there after the processing is done.
Quick question - I am using a plugin called Google XML Sitemaps which has the following tick box option.
'Add sitemap URL to the virtual robots.txt file'.
'The virtual robots.txt generated by WordPress is used. A real robots.txt file must NOT exist in the blog directory!'
Should this box be ticked or un-ticked, please? FYI, I currently don't have the box ticked.
-
Thanks guys for all the responses and helping!
Three things to try:
1. Fix robots.txt
Sofia - I just checked your robots.txt now and it reads:
User-agent: * Disallow: Sitemap: http://containerforsale.co.uk/sitemap.xml.gz
- with the sitemap on the same line as the disallow - I'd check on that and make sure it's on a separate line.
- ALSO, you don't need the .gz on the sitemap file, just sitemap.xml
2. Re-submit Sitemap
- RESUBMIT your sitemap to Webmaster Tools and make sure it's valid.
3. Submit URL to Webmaster Tools (only last resort)
This is only a last-case scenario; you shouldn't have to do this for the homepage if everything is correct.
- go to Fetch as Googlebot -> run the fetch -> then submit the URL
- do this for the homepage
- see the article on the Google blog for reference
Let us know if you're all set, thanks!
-Dan
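Once the file is laid out one directive per line, you can confirm locally that the Sitemap line is being picked up and that the empty Disallow blocks nothing. A small sketch with Python's standard library (the `site_maps()` helper needs Python 3.8+); the contents below mirror Dan's corrected layout, with the .gz dropped:

```python
# Verify a corrected robots.txt: the Sitemap declaration is exposed by
# the parser, and an empty Disallow means everything is crawlable.
from urllib.robotparser import RobotFileParser

corrected = """\
User-agent: *
Disallow:

Sitemap: http://containerforsale.co.uk/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(corrected.splitlines())

print(parser.site_maps())                  # declared sitemap URLs
print(parser.can_fetch("Googlebot", "/"))  # empty Disallow blocks nothing
```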
-
OK, thanks Brent. I changed it to:
User-agent: *
Disallow:
Sitemap: http://containerforsale.co.uk/sitemap.xml.gz
Guess I will just have to wait for Google to refresh now...
-
Yes, the URLs being blocked are includes from your WordPress installation.
-
Thanks for the heads up.
The warning just says 7 URLs blocked by robots.txt. I have seen this issue posted on the WordPress boards by others, but no real insight into solutions.
Perhaps I should try your idea of
Change the robots.txt file to this:
User-agent: *
Disallow:
-
Well there is a robots.txt file. You can view it here: http://containerforsale.co.uk/robots.txt
What warnings are you getting in your sitemap submission area? It appears to look alright: http://containerforsale.co.uk/sitemap.xml But I tried to validate it and got a 504 Gateway Time-out error. http://www.xml-sitemaps.com/index.php?op=validate-xml-sitemap&go=1&sitemapurl=http%3A%2F%2Fcontainerforsale.co.uk%2Fsitemap.xml&submit=Validate
-
It's weird - the front-page warning on Google Webmaster for robots has disappeared now, but I've still got the warnings in the sitemap submission area. My host suggests I just wait a bit longer for Google to update, because he said the same as you - that there doesn't seem to be any robots.txt file.
-
Doesn't appear to be blocked, so maybe it has something to do with your /wp-includes/ directory.
Change the robots.txt file to this:
User-agent: *
Disallow:
-
Hey Guys,
Thanks for your replies... the domain is http://containerforsale.co.uk. My host told me to look in the public_html folder for the robots.txt file and just delete it, but I can't see it in there.
My host said he found a tester site and it doesn't report any issues:
http://www.searchenginepromotionhelp.com/m/robots-text-tester/robots-checker.php
This is the display I get from http://containerforsale.co.uk/robots.txt
User-agent *
Disallow: /wp-admin/
Disallow: /wp-includes/
-
Hi Sofia,
Two things you need to consider when troubleshooting this:
The actual robots.txt file (located in the root directory of your site) and the meta robots tags in the <head> section of your HTML. When you say you checked the source code and the robots instructions were missing, I think you were talking about the meta robots tags in the actual HTML of your site.
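The two mechanisms live in different places: robots.txt is fetched from the site root, while the meta robots tag has to be pulled out of the page markup. A minimal standard-library sketch of spotting the meta tag (the sample HTML is illustrative):

```python
# Extract the meta robots directive from page HTML with the standard
# library. The sample markup below is a made-up example.
from html.parser import HTMLParser

class MetaRobotsFinder(HTMLParser):
    """Records the content of the first <meta name="robots"> tag seen."""
    def __init__(self):
        super().__init__()
        self.content = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.content = attrs.get("content")

html = '<html><head><meta name="robots" content="noindex,nofollow"></head></html>'
finder = MetaRobotsFinder()
finder.feed(html)
print(finder.content)  # noindex,nofollow
```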
Webmaster Tools is probably referring to the actual robots.txt file in your domain's root path, which would differ entirely and not be visible by checking the HTML on your site. Like Nakul and Brent said, if you'll let us know your site's URL and paste the content of your robots.txt file here, I'm sure one of us can help you resolve the problem fairly quickly.
Thanks!
Anthony
-
Copy whatever you have in your robots.txt file here and we will tell you the issue.
SEOmoz has a great article about Robots.txt files here: http://www.seomoz.org/learn-seo/robotstxt
-
The robots.txt file would probably not be part of the WordPress configuration; WordPress controls indexing via meta tags in the page markup.
I would look for something like this in yourdomain.com/robots.txt:
Disallow: /
or something like that. If that does not help, PM me your site URL and I would be glad to look it up for you.
Related Questions
-
Robots.txt allows wp-admin/admin-ajax.php
Hello, Mozzers!
Technical SEO | AndyKubrin
I noticed something peculiar in the robots.txt used by one of my clients: Allow: /wp-admin/admin-ajax.php What would be the purpose of allowing a search engine to crawl this file?
Is it OK? Should I do something about it?
Everything else on /wp-admin/ is disallowed.
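The Allow/Disallow interplay described here can be sketched with Python's standard-library parser. Python applies rules in file order while Google uses longest-match precedence, but with the Allow line listed first both approaches agree for these paths:

```python
# admin-ajax.php is explicitly allowed while the rest of /wp-admin/
# stays disallowed; the rule order below makes Python's first-match
# behaviour agree with Google's longest-match behaviour.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "/wp-admin/admin-ajax.php"))  # True
print(parser.can_fetch("Googlebot", "/wp-admin/options.php"))     # False
```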
Thanks in advance for your help.
-AK
-
I have two robots.txt pages for www and non-www version. Will that be a problem?
There are two robots.txt pages. One for www version and another for non-www version though I have moved to the non-www version.
Technical SEO | ramb
-
Multiple robots.txt files on server
Hi! I previously hired a developer to put up my site and noticed afterwards that he did not know much about SEO. This led me to start learning myself and applying some changes step by step. One of the things I am currently doing is inserting a sitemap reference in the robots.txt file (which was not there before). But just now, when I wanted to upload the file via FTP to my server, I found multiple ones, in different sizes, and I don't know what to do with them. Can I remove them? I have downloaded and opened them, and they seem to be 2 text files and 2 duplicates. Names:
robots.txt (original duplicate)
robots.txt-Original (original)
robots.txt-NEW (other content)
robots.txt-Working (other content duplicate)
Would really appreciate help and expertise suggestions. Thanks!
Technical SEO | mjukhud
Removing a large number of unnecessary pages from a site
Hi all, I've got a big problem with my website. I have a lot of pages - duplicate pages made from various combinations of selects - and for all this duplicate content we were hit by a Panda update 2 years ago. I don't want to bring new content to all of these pages, about 3,000,000, because most of them are unnecessary. Google indexed all of them (3,000,000), and I want to redirect the pages that I don't need anymore to the most important ones. My question: is there any problem in how Google will see this change, given that after this only 5,000-6,000 relevant pages will remain?
Technical SEO | Silviu
-
Blocking Affiliate Links via robots.txt
Hi, I work with a client who has a large affiliate network pointing to their domain, which is a large part of their inbound marketing strategy. All of these links point to a subdomain of affiliates.example.com, which then redirects the links through a 301 redirect to the relevant target page for the link. These links have been showing up in Webmaster Tools as top linking domains and also in the latest downloaded links reports. To follow guidelines and ensure that these links aren't counted by Google for either positive or negative impact on the site, we have added a block on the robots.txt of the affiliates.example.com subdomain, blocking search engines from crawling the full subdomain. The robots.txt file is the following code:
User-agent: *
Disallow: /
We have authenticated the subdomain with Google Webmaster Tools and made certain that Google can reach and read the robots.txt file. We know they are being blocked from reading the affiliates subdomain. However, we added this affiliates subdomain block to the robots.txt a few weeks ago, but links are still showing up in the latest downloads report as first being discovered after we added the block. It's been a few weeks already, and we want to make sure that the block was implemented properly and that these links aren't being used to negatively impact the site. Any suggestions or clarification would be helpful - if the subdomain is being blocked for the search engines, why are the search engines following the links and reporting them in the www.example.com GWMT account as latest links?
And if the block is implemented properly, will the total number of links pointing to our site as reported in the 'links to your site' section be reduced, or does this not have an impact on that figure?
From a development standpoint, it's a much easier fix for us to adjust the robots.txt file than to change the affiliate linking connection from a 301 to a 302, which is why we decided to go with this option.
Any help you can offer will be greatly appreciated.
Thanks,
Mark
Technical SEO | Mark_Ginsberg
-
Removing images from site and Image Sitemap SEO advice
Hello again, I have received an update request where they want me to remove images from this site (as of now it's a bunch of thumbnails). Current page design: http://1stimpressions.com/portfolio/car-wraps/ - and turn it into a new design which utilizes a slider (such as this): http://1stimpressions.com/portfolio/ They don't want the thumbnails on the page anymore. My question is: since my site has an image sitemap that has been indexed, will removing all the images hurt my SEO greatly? If so, what would the recommended steps be to reduce any SEO damage? Thank you again for your help - always great and very helpful feedback! 🙂 Cheers!
Technical SEO | allstatetransmission
Mobile site ranking instead of/as well as desktop site in desktop SERPS
I have just noticed that the mobile version of my site is sometimes ranking in the desktop SERPs, either instead of or as well as the desktop site. It is not something that I have noticed in the past, as it doesn't happen with the keywords that I track, which are highly competitive. It is happening for results that include our brand name, e.g. '[brand name] [search term]'. The mobile site is served with mobile-optimised content from another URL, e.g. www.domain.com/productpage redirects to m.domain.com/productpage for mobile. Sometimes I am only seeing the mobile URL in the desktop SERPs; other times I am seeing both the desktop and mobile URL for the same product. My understanding is that the mobile URL should not be ranking at all in desktop SERPs. Could we be being penalised for either bad redirects or duplicate content? Any ideas as to how I could further diagnose and solve the problem, if you do believe that it could be harming rankings?
Technical SEO | pugh
-
Oh no, Googlebot cannot access my robots.txt file
I just received an error message from Google Webmaster Tools. I wonder if it has something to do with the Yoast plugin. Could somebody help me with troubleshooting this? Here's the original message:
Over the last 24 hours, Googlebot encountered 189 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
Recommended action
If the site error rate is 100%:
- Using a web browser, attempt to access http://www.soobumimphotography.com//robots.txt. If you are able to access it from your browser, then your site may be configured to deny access to Googlebot. Check the configuration of your firewall and site to ensure that you are not denying access to Googlebot.
- If your robots.txt is a static page, verify that your web service has proper permissions to access the file.
- If your robots.txt is dynamically generated, verify that the scripts that generate the robots.txt are properly configured and have permission to run. Check the logs for your website to see if your scripts are failing, and if so attempt to diagnose the cause of the failure.
If the site error rate is less than 100%:
- Using Webmaster Tools, find a day with a high error rate and examine the logs for your web server for that day. Look for errors accessing robots.txt in the logs for that day and fix the causes of those errors.
- The most likely explanation is that your site is overloaded. Contact your hosting provider and discuss reconfiguring your web server or adding more resources to your website.
After you think you've fixed the problem, use Fetch as Google to fetch http://www.soobumimphotography.com//robots.txt to verify that Googlebot can properly access your site.
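The "we postponed our crawl" behaviour in that message follows from how Google handles robots.txt fetch results. A rough sketch of those documented outcomes as a tiny classifier - this is a simplification for illustration, not an official API:

```python
# Rough classifier of Googlebot's documented robots.txt fetch handling:
# a 2xx response is parsed, a missing file (404/410) means no
# restrictions, and server errors cause Google to postpone crawling,
# as in the error message above. Simplified for illustration.
def robots_fetch_outcome(status: int) -> str:
    if 200 <= status < 300:
        return "parse rules"       # file fetched; obey its directives
    if status in (404, 410):
        return "crawl everything"  # no robots.txt means no restrictions
    if status >= 500:
        return "postpone crawl"    # server error: crawl is deferred
    return "undefined"             # other cases omitted in this sketch

for code in (200, 404, 503):
    print(code, "->", robots_fetch_outcome(code))
# 200 -> parse rules
# 404 -> crawl everything
# 503 -> postpone crawl
```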
Technical SEO | BistosAmerica