Google Search Console says 'sitemap is blocked by robots.txt'?
-
Google Search Console is telling me "Sitemap contains URLs which are blocked by robots.txt."
I don't understand why my sitemap is being blocked. My robots.txt looks like this:
User-Agent: *
Disallow:
It's a WordPress site, with Yoast SEO installed. Is anyone else having this issue with Google Search Console? Does anyone know how I can fix it?
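A quick way to verify a report like this yourself is a minimal script that tests every URL in a sitemap against the live robots.txt. This is a sketch using only the Python standard library; the domain and sitemap filename are placeholders, not the actual site.

```python
# Minimal sketch: test each sitemap URL against the live robots.txt.
# "example.com" and "post-sitemap.xml" are placeholders.
import urllib.robotparser
import urllib.request
import xml.etree.ElementTree as ET

SITE = "https://example.com"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

rp = urllib.robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetch and parse the live robots.txt

with urllib.request.urlopen(f"{SITE}/post-sitemap.xml") as resp:
    tree = ET.parse(resp)

for loc in tree.findall(".//sm:loc", NS):
    url = (loc.text or "").strip()
    if not rp.can_fetch("Googlebot", url):
        print("Blocked by robots.txt:", url)
```

If that prints nothing, the robots.txt itself isn't blocking anything, and the warning usually points at stale crawl data or a plugin/noindex issue instead.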
-
Nice, happy to hear that. Do you work with Greg Reindel? He is a good friend; I looked at your IP, which is why I ask.
Tom
-
I agree with David.
Hey, is your dev Greg Reindel? If so, you can call me for help. PM me here for my info.
Thomas Zickell
-
Hey guys, I ended up disabling the sitemap option in Yoast SEO, then installed the 'Google XML Sitemaps' plugin. I re-submitted the sitemap to Google last night, and it came back with no issues. I'm glad to finally have this sorted out.
Thanks for all the help!
-
Hi Christian,
The current robots.txt shouldn't be blocking those URLs.
Did you or someone else recently change the robots.txt file? If so, give Google a few days to re-crawl your site.
Also, can you check what happens when you do a fetch and render on one of the blocked posts in Search Console? Do you have issues there?
Cheers,
David
-
I think you need to make an HTTPS robots.txt file if you are running HTTPS.
https://moz.com/blog/xml-sitemaps
`User-agent: *`
`Disallow: /wp-admin/`
`Allow: /wp-admin/admin-ajax.php`
`Sitemap: https://domain.com/index-sitemap.xml`
(That is an HTTPS sitemap.) Can you send the sitemap URL, or run it through DeepCrawl?
Hope this helps.
Did you make a new robots.txt file?
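As a quick sanity check on that suggestion, a short sketch like the one below confirms the robots.txt is reachable over HTTPS and actually advertises a sitemap. The domain.com address is just the placeholder from the example above.

```python
# Sketch: confirm robots.txt is reachable over HTTPS and lists a Sitemap.
# "domain.com" is the placeholder used in the example above.
import urllib.request

ROBOTS_URL = "https://domain.com/robots.txt"

with urllib.request.urlopen(ROBOTS_URL) as resp:
    print("HTTP status:", resp.status)
    body = resp.read().decode("utf-8", errors="replace")

for line in body.splitlines():
    if line.lower().startswith("sitemap:"):
        print("Sitemap directive:", line.split(":", 1)[1].strip())
```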
-
Thanks for the response. Do you think this is a robots.txt issue, or could it be caused by the Yoast SEO plugin?
Do you know if this plugin works together with Yoast SEO, or will it cause issues?
-
Thank you for the response.
I just scanned the site using Screaming Frog. Under Internal > Directives there were zero 'noindex' URLs. I also checked for 404 errors, 5xx server errors, and anything blocked by robots.txt.
Google Search Console is still showing me that URLs in my sitemap are being blocked (I added a screenshot of this). When I click through, it tells me that the 'post sitemap' has over 300 warnings.
I have just deleted the Yoast SEO plugin and am now reinstalling it. Hopefully this fixes the issue.
-
No, you do not need to change plugins. What is happening is that Webmaster Tools is telling you that you have a noindex or nofollow meta robots tag (or X-Robots-Tag) somewhere on URLs inside your sitemap.
Run your site through Moz, Screaming Frog SEO Spider, or DeepCrawl and look for noindexed URLs.
Webmaster Tools/Search Console is telling you that you have noindexed URLs inside your XML sitemap, not that your robots.txt is blocking them. This would be set in the Yoast plugin. One way to correct it is to find the noindexed URLs and exclude them in Yoast so they are not presented to crawlers in the sitemap.
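For anyone who prefers a script to a crawler, here is a rough sketch of that check: pull every URL out of the XML sitemap and flag any page that carries a noindex robots meta tag or X-Robots-Tag header. The domain and sitemap filename are placeholders, and the meta-tag check is a crude regex rather than a full HTML parse.

```python
# Sketch: flag sitemap URLs that carry a noindex meta tag or X-Robots-Tag header.
# Domain and sitemap filename are placeholders.
import re
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/post-sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP_URL) as resp:
    urls = [(loc.text or "").strip() for loc in ET.parse(resp).findall(".//sm:loc", NS)]

for url in urls:
    with urllib.request.urlopen(url) as page:
        header = page.headers.get("X-Robots-Tag", "")
        html = page.read().decode("utf-8", errors="replace")
    # crude check for <meta name="robots" content="...noindex...">
    meta_noindex = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
        html, re.IGNORECASE)
    if "noindex" in header.lower() or meta_noindex:
        print("noindexed:", url)
```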
If you would like, you can turn off the sitemap in Yoast and turn it back on. If that does not work, I recommend completely removing the plugin and reinstalling it:
- https://kb.yoast.com/kb/how-can-i-uninstall-my-plugin/
- https://kinsta.com/blog/uninstall-wordpress-plugin/
Can you send a screenshot of what you're seeing?
When you see it in Google Webmaster Tools, are you talking about the XML sitemap itself being noindexed? All XML sitemaps are noindexed.
Please add this to your robots.txt:
`User-agent: *`
`Disallow: /wp-admin/`
`Allow: /wp-admin/admin-ajax.php`
`Sitemap: http://www.website.com/sitemap_index.xml`
I hope this is of help,
Tom
-
Hi,
Use this plugin:
https://wordpress.org/plugins/wp-robots-txt/
It will remove the previous robots.txt and set a simple default WordPress robots.txt. Wait a day and the problem should be resolved.
Also watch this video on the same topic: https://www.youtube.com/watch?v=DZiyN07bbBM
Thanks
Related Questions
-
Google Search Console Still Reporting Errors After Fixes
Hello, I'm working on a website that was too bloated with content. We deleted many pages and set up redirects to newer pages. We also resolved an unreasonable number of 400 errors on the site. I also removed several ancient sitemaps that listed content deleted years ago that Google was crawling. According to Moz and Screaming Frog, these errors have been resolved. We've submitted the fixes for validation in GSC, but the validation repeatedly fails. What could be going on here? How can we resolve these errors in GSC?
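One hedged way to sanity-check a situation like this is a small script that verifies each old, deleted URL now returns a single 301 hop to a live destination. The URL list below is hypothetical, and requests is a third-party package (pip install requests).

```python
# Sketch: verify old, deleted URLs now 301 to live pages.
# The URL list is hypothetical.
import requests

OLD_URLS = [
    "https://example.com/deleted-page-1/",
    "https://example.com/deleted-page-2/",
]

for url in OLD_URLS:
    first_hop = requests.get(url, allow_redirects=False, timeout=10)
    final = requests.get(url, timeout=10)  # follows redirects by default
    print(url, "first hop:", first_hop.status_code, "final:", final.status_code)
    # GSC validation can keep failing if these return 404/410 or the chain ends in an error.
```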
Technical SEO | tif-swedensky
-
Abnormally high internal link count reported in Google Search Console not matching Moz reports
If I'm looking at our internal link count and structure on Google Search Console, some pages are listed as having over a thousand internal links within our site. I've read that having too many internal links on a page devalues that page's PageRank, because the value is divided amongst the pages it links out to. Likewise, I've heard having too many internal links is just bad in general for SEO. Is that true? The problem I'm facing is determining how Google is "discovering" these internal links. If I'm just looking at one single page reported with, say, 1,350 links and I'm just looking at the code, it may only have 80 or 90 actual links. Moz will confirm this, as well. So why would Google Search Console report differently? Should I be concerned about this?
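To automate the "just looking at the code" comparison described above, a small sketch like this counts the unique same-host links actually present in one page's HTML. The page URL is a placeholder; requests and BeautifulSoup are third-party packages.

```python
# Sketch: count unique internal links in a page's raw HTML for comparison
# with the figure Search Console reports. The page URL is a placeholder.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

PAGE = "https://example.com/some-page/"
host = urlparse(PAGE).netloc

soup = BeautifulSoup(requests.get(PAGE, timeout=10).text, "html.parser")

internal = set()
for a in soup.find_all("a", href=True):
    target = urljoin(PAGE, a["href"]).split("#")[0]  # resolve relative links, drop fragments
    if urlparse(target).netloc == host:
        internal.add(target)

print(len(internal), "unique internal links in the raw HTML")
```

Note that the Search Console internal links report generally counts links pointing to a page from across the whole site, not just the links in that page's own source, which is usually where a gap like this comes from.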
Technical SEO | Closetstogo
-
Blocking Affiliate Links via robots.txt
Hi, I work with a client who has a large affiliate network pointing to their domain, which is a large part of their inbound marketing strategy. All of these links point to a subdomain, affiliates.example.com, which then redirects the links through a 301 redirect to the relevant target page for the link. These links have been showing up in Webmaster Tools as top linking domains and also in the latest downloaded links reports. To follow guidelines and ensure that these links aren't counted by Google for either positive or negative impact on the site, we have added a block to the robots.txt of the affiliates.example.com subdomain, blocking search engines from crawling the full subdomain. The robots.txt file is the following code: `User-agent: * Disallow: /` We have authenticated the subdomain with Google Webmaster Tools and made certain that Google can reach and read the robots.txt file. We know they are being blocked from reading the affiliates subdomain. However, we added this affiliates subdomain block to the robots.txt a few weeks ago, but links are still showing up in the latest downloads report as first being discovered after we added the block. It's been a few weeks already, and we want to make sure that the block was implemented properly and that these links aren't being used to negatively impact the site. Any suggestions or clarification would be helpful - if the subdomain is being blocked for the search engines, why are the search engines following the links and reporting them in the www.example.com subdomain's GWMT account as latest links? And if the block is implemented properly, will the total number of links pointing to our site as reported in the "links to your site" section be reduced, or does this not have an impact on that figure? From a development standpoint, it's a much easier fix for us to adjust the robots.txt file than to change the affiliate linking connection from a 301 to a 302, which is why we decided to go with this option. Any help you can offer will be greatly appreciated. Thanks, Mark
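A hedged sketch of the two checks implied above, with placeholder URLs: is the affiliates subdomain really disallowed for crawlers, and does a sample affiliate URL still answer with a 301? (requests is a third-party package.)

```python
# Sketch: confirm the affiliate subdomain is disallowed for crawlers and that
# a sample affiliate URL still responds with a 301. URLs are placeholders.
import urllib.robotparser
import requests

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://affiliates.example.com/robots.txt")
rp.read()

sample = "https://affiliates.example.com/some-affiliate-link"
print("Crawlable by Googlebot:", rp.can_fetch("Googlebot", sample))  # expect False

resp = requests.get(sample, allow_redirects=False, timeout=10)
print("Status:", resp.status_code, "Location:", resp.headers.get("Location"))
```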
Technical SEO | Mark_Ginsberg
-
Adding multi-language sitemaps to robots.txt
I am working on a revamped multi-language site that has moved to Magento. Each language runs off the core coding, so there are no sub-directories per language. The developer has created sitemaps which have been uploaded to their respective GWT accounts. They have placed the sitemaps in new directories such as: /sitemap/uk/sitemap.xml /sitemap/de/sitemap.xml I want to add the sitemaps to the robots.txt but can't figure out how to do it. Also, should they have placed the sitemaps in a single location with the file identifying each language: /sitemap/uk-sitemap.xml /sitemap/de-sitemap.xml What is the cleanest way of handling these sitemaps, and can/should I get them into robots.txt?
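One syntax note that may help: the sitemaps protocol allows any number of Sitemap: lines in robots.txt, each with an absolute URL, so with the directory layout described above (and example.com standing in for the real domain) the entries would look something like:
`Sitemap: https://www.example.com/sitemap/uk/sitemap.xml`
`Sitemap: https://www.example.com/sitemap/de/sitemap.xml`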
Technical SEO | MickEdwards
-
I accidentally blocked Google with Robots.txt. What next?
Last week I uploaded my site and forgot to remove the robots.txt file with this text: `User-agent: * Disallow: /` I dropped from page 11 on my main keywords to past page 50. I caught it 2-3 days later and have now fixed it. I re-imported my sitemap with Webmaster Tools and I also did a Fetch as Google through Webmaster Tools. I tweeted out my URL to hopefully get Google to crawl it faster too. Webmaster Tools no longer says that the site is experiencing outages, but when I look at my blocked URLs it still says 249 are blocked. That's actually gone up since I made the fix. In the Google search results, my page title still isn't showing and the description still says "A description for this result is not available because of this site's robots.txt – learn more." How will this affect me long-term? When will I recover my rankings? Is there anything else I can do? Thanks for your input! www.decalsforthewall.com
Technical SEO | Webmaster123
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl css and javascript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the javascript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc. And the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear - we are NOT doing any sneaky redirects or other dodgy javascript hacks. We're just trying to power our content and UX elegantly with javascript. What do you guys say: Obey Matt? Or run the javascript gauntlet?
Technical SEO | AndreVanKets
-
Should we use Google's crawl delay setting?
We've been noticing a huge uptick in Google's spidering lately, and along with it a notable worsening of render times. Yesterday, for example, Google spidered our site at a rate of 30:1 (Google spider vs. organic traffic). So in other words, for every organic page request, Google hits the site 30 times. Our render times have lengthened to an average of 2 seconds (and up to 2.5 seconds). Before this renewed interest Google has taken in us, we were seeing closer to one-second average render times, and often half of that. A year ago, the ratio of spider to organic was between 6:1 and 10:1. Is requesting a crawl-delay from Googlebot a viable option? Our goal would be only to reduce Googlebot traffic, and hopefully improve render times and organic traffic. Thanks, Trisha
Technical SEO | lzhao