Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Oh no, Googlebot cannot access my robots.txt file
-
I just received an error message from Google Webmaster Tools.
I wonder if it has something to do with the Yoast plugin.
Could somebody help me with troubleshooting this?
Here's the original message:
Over the last 24 hours, Googlebot encountered 189 errors while attempting to access your robots.txt. To ensure that we didn't crawl any pages listed in that file, we postponed our crawl. Your site's overall robots.txt error rate is 100.0%.
Recommended action
If the site error rate is 100%:
- Using a web browser, attempt to access http://www.soobumimphotography.com//robots.txt. If you are able to access it from your browser, then your site may be configured to deny access to googlebot. Check the configuration of your firewall and site to ensure that you are not denying access to googlebot.
- If your robots.txt is a static page, verify that your web service has proper permissions to access the file.
- If your robots.txt is dynamically generated, verify that the scripts that generate the robots.txt are properly configured and have permission to run. Check the logs for your website to see if your scripts are failing, and if so attempt to diagnose the cause of the failure.
If the site error rate is less than 100%:
- Using Webmaster Tools, find a day with a high error rate and examine the logs for your web server for that day. Look for errors accessing robots.txt in the logs for that day and fix the causes of those errors.
- The most likely explanation is that your site is overloaded. Contact your hosting provider and discuss reconfiguring your web server or adding more resources to your website.
After you think you've fixed the problem, use Fetch as Google to fetch http://www.soobumimphotography.com//robots.txt to verify that Googlebot can properly access your site.
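For the first recommended check above, here is a minimal sketch of how to try the fetch with Python (standard library only; this is a rough stand-in, not exactly what Googlebot does):

```python
# Hedged sketch: fetch robots.txt with the default urllib UA and with a
# Googlebot UA string, and compare the results. Standard library only.
import urllib.request
import urllib.error

URL = "http://www.soobumimphotography.com/robots.txt"
GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

for label, headers in (("default UA", {}),
                       ("googlebot UA", {"User-Agent": GOOGLEBOT_UA})):
    req = urllib.request.Request(URL, headers=headers)
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"{label}: HTTP {resp.status}, {len(resp.read())} bytes")
    except urllib.error.URLError as exc:
        print(f"{label}: failed ({exc})")  # a UA-based block would show up here
```

Note that this only catches user-agent-based blocking; a firewall rule keyed on Google's IP ranges would still pass this test from your own machine.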
-
I can open the text file, but GoDaddy told me the robots.txt file is not on my server (at the root level).
They also told me that my site is not being crawled because the robots.txt file is not there.
Basically, all of this might have resulted from a plugin I was using (Term Optimizer).
Based on what GoDaddy told me, my .htaccess file was corrupted because of that and had to be recreated. So now the .htaccess file is good.
Now I have to figure out why my site is not accessible to Googlebot.
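Since the .htaccess file was just recreated, one sanity check is to scan it for directives that commonly block crawlers. A rough sketch, with illustrative (not exhaustive) patterns:

```python
# Rough heuristic scan of an .htaccess file for crawler-blocking directives.
# The patterns below are illustrative, not an exhaustive list.
import re

SUSPECT_PATTERNS = [
    r"(?i)deny\s+from",                # old-style Deny rules
    r"(?i)require\s+not",              # Apache 2.4 negative Require
    r"(?i)rewritecond\s+.*googlebot",  # rewrites keyed on the Googlebot UA
    r"(?i)setenvif.*googlebot",        # env-based blocking of Googlebot
]

with open(".htaccess", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        for pat in SUSPECT_PATTERNS:
            if re.search(pat, line):
                print(f"line {lineno}: {line.rstrip()}")
                break
```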
Let me know, Keith, if this is a quick fix or needs some time to troubleshoot. You can send me a message to discuss fees if necessary.
Thanks again
-
Hi,
You have a robots.txt file here: http://www.soobumimphotography.com/robots.txt
Can you write this again in English so it makes sense?
"I called Godaddy and told me if I used any plug ins etc. Godaddy fixed .htaccss file and my site was up and runningjust fine."
Yes, Google XML Sitemaps will add the location of your sitemap to the robots.txt file - but there is nothing wrong with your robots.txt file.
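To confirm the sitemap reference is actually in the live file, here is a small sketch that prints any Sitemap: directives from a robots.txt (standard library only):

```python
# Print any Sitemap: directives found in a site's robots.txt.
import urllib.request

URL = "http://www.soobumimphotography.com/robots.txt"

with urllib.request.urlopen(URL, timeout=10) as resp:
    body = resp.read().decode("utf-8", errors="replace")

for line in body.splitlines():
    if line.lower().startswith("sitemap:"):
        print(line.strip())
```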
-
I just called GoDaddy and they told me that I don't have a robots.txt file. Can anyone help with this issue?
So here's what happened:
I purchased Joost de Valk's Term Optimizer to consolidate tags, etc.
As soon as I installed & opened it, my site crashed.
I called Godaddy and told me if I used any plug ins etc. Godaddy fixed .htaccss file and my site was up and runningjust fine.
Doesn't a plugin like Google XML Sitemaps automatically generate a robots.txt file?
-
Yes, my site was down.
-
I had a .htaccess issue in the past 24 hours with a plugin, and GoDaddy fixed it for me.
I think this caused the problem.
I just fetched again and am still getting an unreachable page. I wonder if I have a bad .htaccess file.
-
Was your site down during this period?
I would recommend setting up pingdom.com (free site monitoring); it will email you if your site goes down. I suspect this is a hosting-related issue.
FYI, I can access your robots.txt fine from here.
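If you want a quick stopgap before signing up, something as simple as the following polling sketch can log downtime, though a hosted monitor such as Pingdom (which checks from many locations and alerts you) is still the better option:

```python
# Bare-bones availability check: poll the homepage and log failures.
# A hosted monitor is still the better option; this is only a stopgap.
import time
import urllib.request
import urllib.error

SITE = "http://www.soobumimphotography.com/"
INTERVAL_SECONDS = 300  # check every five minutes

while True:
    try:
        with urllib.request.urlopen(SITE, timeout=15) as resp:
            status = resp.status
    except urllib.error.URLError as exc:
        status = f"DOWN ({exc})"
    print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  {status}")
    time.sleep(INTERVAL_SECONDS)
```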
-
Hi Bistoss,
You should log into Google Webmaster Tools to check the day the problem occurred. It is not uncommon for hosts to have problems that temporarily cause access errors. In some rare cases Google itself could be having problems. For example, in July we had one day with an 11% failure rate; it was the host. Since then, no problems. If your problems are persistent, then you may have an issue like the one described here (old Analytics code): http://blog.jitbit.com/2012/08/fixing-googlebot-cant-access-your-site.html
Other things to look at are any recent changes, specifically anything that had to do with .htaccess. Be sure to use Fetch as Google after any changes to verify that Google can now crawl your site.
Hope this helps
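If you have shell access, one way to examine those logs is to filter for Googlebot's robots.txt requests and their status codes. A sketch assuming the common combined log format (the log path is a placeholder; adjust for your host):

```python
# Scan a combined-format access log for Googlebot requests to robots.txt.
# Log path and format are assumptions; adjust for your host.
import re

LOG_PATH = "access.log"
# combined format: ip - - [time] "METHOD path HTTP/x" status size "ref" "ua"
LINE_RE = re.compile(
    r'"[A-Z]+ (?P<path>\S+) HTTP/[^"]*" (?P<status>\d{3}) .*"(?P<ua>[^"]*)"$')

with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        m = LINE_RE.search(line)
        if not m:
            continue
        if ("googlebot" in m.group("ua").lower()
                and m.group("path").endswith("robots.txt")):
            print(m.group("status"), line.rstrip())
```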
-
I also use the Robots Meta Configuration plugin.
Related Questions
-
I have two robots.txt pages for www and non-www version. Will that be a problem?
There are two robots.txt files: one for the www version and another for the non-www version, though I have moved to the non-www version.
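Two copies are usually harmless as long as the hostname you retired 301-redirects to the live one. A quick sketch to check both (example.com is a placeholder for your own domain):

```python
# Check robots.txt on both hostnames and whether www redirects to non-www.
# example.com is a placeholder; urllib follows redirects automatically.
import urllib.request

for host in ("https://example.com", "https://www.example.com"):
    url = host + "/robots.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(f"{url} -> HTTP {resp.status}, served from {resp.geturl()}")
        # if the www URL is served without redirecting to non-www,
        # keep its robots.txt content in sync or add a 301
```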
Technical SEO | ramb0
-
Crawl solutions for landing pages that don't contain a robots.txt file?
My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?
Technical SEO | Nomader1
-
How can I get a photo album indexed by Google?
We have a lot of photos on our website. Unfortunately most of them don't seem to be indexed by Google. We run a party website. One of the things we do is take pictures at events and put them on the site. An event page with a photo album can have anywhere between 100 and 750 photos. For each photo there is a thumbnail on the page. The thumbnails are lazy loaded by showing a placeholder and loading the picture right before it comes on screen. There is no pagination or infinite scrolling. Thumbnails don't have alt text. Each thumbnail links to a picture page. This page only shows the base HTML structure (menu, etc.), the image, and a close button. The image has a src attribute with the full-size image, a srcset with several sizes for responsive design, and an alt text. There is no real textual content on an image page. (Note that when a user clicks on the thumbnail, the large image is loaded using JavaScript and we mimic the page change. I think it doesn't matter, but am unsure.) I'd like the full-size images to be indexed by Google and found with Google Image Search. Thumbnails should not be indexed (or should be ignored). Unfortunately most pictures aren't found, or their thumbnail is shown. Moz is telling me that all the picture pages are duplicate content (19,521 issues), as they are all the same with the exception of the image. The page title isn't the same but is similar for all images of an album. Example: On the "A day at the park" event page, we have 136 pictures. A site search on "a day at the park" foto only reveals two photos of the album.
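One remedy often suggested for image discovery (offered here as a sketch, not a guaranteed fix) is an image sitemap, so Google can find the full-size files without executing the JavaScript viewer. The album data below is hypothetical:

```python
# Minimal sketch: build a Google image sitemap, one <url> per event page.
# The album data is hypothetical; adapt it to your own data source.
from xml.sax.saxutils import escape

albums = {
    "https://example.com/events/a-day-at-the-park": [
        "https://example.com/photos/park-001.jpg",
        "https://example.com/photos/park-002.jpg",
    ],
}

lines = ['<?xml version="1.0" encoding="UTF-8"?>',
         '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"',
         '        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">']
for page_url, images in albums.items():
    lines.append("  <url>")
    lines.append(f"    <loc>{escape(page_url)}</loc>")
    for img in images:
        lines.append(f"    <image:image><image:loc>{escape(img)}</image:loc></image:image>")
    lines.append("  </url>")
lines.append("</urlset>")

with open("image-sitemap.xml", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))
```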
Technical SEO | jasny0
-
Some URLs were not accessible to Googlebot due to an HTTP status error.
Hello, I'm an SEO newbie and some help from the community here would be greatly appreciated. I have submitted the sitemap of my website in Google Webmaster Tools and now I get this warning: "When we tested a sample of the URLs from your Sitemap, we found that some URLs were not accessible to Googlebot due to an HTTP status error. All accessible URLs will still be submitted." How do I fix this? What should I do? Many thanks in advance.
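A first step is to find out which sitemap URLs return errors. A minimal sketch that fetches a flat sitemap (not a sitemap index) and reports any URL with a 4xx/5xx status; the sitemap URL is a placeholder:

```python
# Minimal sketch: fetch a flat sitemap and report the HTTP status of each URL.
# SITEMAP_URL is a placeholder; some servers reject HEAD, so adjust if needed.
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP_URL, timeout=10) as resp:
    tree = ET.parse(resp)

for loc in tree.findall(".//sm:loc", NS):
    url = loc.text.strip()
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as r:
            status = r.status
    except urllib.error.HTTPError as e:
        status = e.code
    if status >= 400:
        print(f"{status}  {url}")  # these are the URLs Googlebot choked on
```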
Technical SEO | GoldenRanking140
-
Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
I've got several URLs that I need to disallow in my robots.txt file. For example, I've got several documents that I don't want indexed, and filters that are getting flagged as duplicate content. Rather than typing in thousands of URLs, I was hoping that wildcards were still valid.
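Yes - Google and Bing both honor * (any run of characters) and a trailing $ (end anchor) in robots.txt rules. A rough sketch of those matching semantics, useful for testing patterns before deploying them (the rules and URLs below are made up):

```python
# Rough sketch of Google-style robots.txt wildcard matching:
# '*' matches any run of characters; a trailing '$' anchors the end.
import re

def rule_to_regex(rule: str) -> re.Pattern:
    anchored = rule.endswith("$")
    body = rule[:-1] if anchored else rule
    pattern = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile(pattern + ("$" if anchored else ""))

def blocked(rule: str, path: str) -> bool:
    return rule_to_regex(rule).match(path) is not None

# Hypothetical rules: block all PDFs and any filtered URL.
print(blocked("/*.pdf$", "/docs/catalog.pdf"))          # True
print(blocked("/*?filter=", "/shop/shoes?filter=red"))  # True
print(blocked("/*.pdf$", "/docs/catalog.pdf?v=2"))      # False ('$' anchors the end)
```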
Technical SEO | mkhGT0
-
Google insists robots.txt is blocking... but it isn't.
I recently launched a new website. During development, I'd enabled the option in WordPress to prevent search engines from indexing the site. When the site went public (over 24 hours ago), I cleared that option. At that point, I added a specific robots.txt file that only disallowed a couple of directories of files. You can view the robots.txt at http://photogeardeals.com/robots.txt Google (via Webmaster Tools) is insisting that my robots.txt file contains a "Disallow: /" on line 2 and that it's preventing Google from indexing the site and preventing me from submitting a sitemap. These errors are showing both in the sitemap section of Webmaster Tools and in the Blocked URLs section. Bing's webmaster tools are able to read the site and sitemap just fine. Any idea why Google insists I'm disallowing everything even after telling it to re-fetch?
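Google caches robots.txt (often for up to a day), so it may still be acting on the development-era file; re-fetching after the cache expires usually clears this. To rule out hidden characters or a stale copy on your end, here is a small sketch that prints the raw lines your server currently returns:

```python
# Print the raw lines of robots.txt to spot stale caches or hidden characters.
import urllib.request

URL = "http://photogeardeals.com/robots.txt"

with urllib.request.urlopen(URL, timeout=10) as resp:
    print("Last-Modified:", resp.headers.get("Last-Modified"),
          "ETag:", resp.headers.get("ETag"))
    body = resp.read().decode("utf-8", errors="replace")
    for i, line in enumerate(body.splitlines(), start=1):
        print(i, repr(line))  # line 2 is where Google claims to see 'Disallow: /'
```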
Technical SEO | ahockley0
-
Temporarily suspend Googlebot without blocking users
We'll soon be launching a redesign, on a new platform, migrating millions of pages to new URLs. How can I tell Google (and other crawlers) to temporarily (a day or two) ignore my site? We're hoping to buy ourselves a small bit of time to verify redirects and live functionality before allowing Google to crawl and index the new architecture. GWT's recommendation is to 503 all pages - including robots.txt, but that also makes the site invisible to real site visitors, resulting in significant business loss. Bad answer. I've heard some recommendations to disallow all user agents in robots.txt. Any answer that puts the millions of pages we already have indexed at risk is also a bad answer. Thanks
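One approach sometimes suggested (a judgment call - test carefully, and note it is deliberate user-agent-conditional behavior): return a 503 with a Retry-After header only to crawler user agents, while humans get the normal site. A minimal WSGI sketch with an illustrative crawler list; remember to remove it once the migration is verified:

```python
# Hedged sketch: WSGI middleware that returns 503 + Retry-After to known
# crawler user agents while serving human visitors normally.
# The crawler token list is illustrative, not exhaustive.
CRAWLER_TOKENS = ("googlebot", "bingbot", "slurp", "duckduckbot")

def crawler_pause(app):
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if any(token in ua for token in CRAWLER_TOKENS):
            start_response("503 Service Unavailable",
                           [("Retry-After", "7200"),  # ask crawlers back in ~2h
                            ("Content-Type", "text/plain")])
            return [b"Temporarily unavailable during migration."]
        return app(environ, start_response)
    return middleware
```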
Technical SEO | lzhao0