Scary bug in search console: All our pages reported as being blocked by robots.txt after https migration
-
We just migrated to https and, two days ago, created a new property in Search Console for the https domain.
The Webmaster Tools account for the https domain now shows the warning "Sitemap contains urls which are blocked by robots.txt." for every page in our sitemap. The Search Console dashboard also shows a red triangle warning that our root domain is blocked by robots.txt. Here is what we have checked so far:
1) When I test the URLs in the Search Console robots.txt testing tool, everything looks fine.
2) When I fetch as Google and render the page, it renders and indexes without problems (it would not if it were really blocked by robots.txt).
3) We temporarily emptied the robots.txt completely, submitted it in Search Console, and uploaded the sitemap again; we got the same warnings even though no robots.txt was online.
4) We ran a Screaming Frog crawl of the whole website and it indicates that no page is blocked by robots.txt.
5) We carefully reviewed the whole robots.txt and it does not contain any rule that blocks relevant content on our site or our root domain (the same robots.txt was online for the last decade on the http version without problems).
6) In Bing Webmaster Tools I could upload the sitemap, and so far no error has been reported.
7) We resubmitted the sitemaps and got the same issue.
8) I already see our root domain with https in the Google SERPs.
The site is https://www.languagecourse.net. Since the site has significant traffic, if Google really did interpret our site as blocked by robots.txt for any reason, we would be in serious trouble.
This is really scary, so even if it is just a bug in Search Console and does not affect crawling of the site, it would be great if someone from Google could look into the reason for it, since for a site owner this can really raise cortisol to unhealthy levels. Has anybody ever experienced the same problem? Does anybody have an idea where we could report/post this issue? -
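For anyone who wants to reproduce the check outside Search Console: the sketch below parses the live https robots.txt and the sitemap and flags any URL a standards-based parser would consider blocked. It is only a rough cross-check: the sitemap path is an assumption, it assumes a plain urlset sitemap rather than a sitemap index, and Python's built-in parser ignores Google's wildcard extensions.

```python
# Cross-check: does the live https robots.txt block any URL in the sitemap?
import urllib.request
import urllib.robotparser
import xml.etree.ElementTree as ET

ROBOTS_URL = "https://www.languagecourse.net/robots.txt"
SITEMAP_URL = "https://www.languagecourse.net/sitemap.xml"  # assumed path; use your real sitemap URL

# Parse the live robots.txt the way a standards-following crawler would
rp = urllib.robotparser.RobotFileParser(ROBOTS_URL)
rp.read()

# Pull every <loc> entry out of the sitemap (assumes a plain urlset, not an index)
with urllib.request.urlopen(SITEMAP_URL) as resp:
    tree = ET.fromstring(resp.read())
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in tree.findall(".//sm:loc", ns)]

blocked = [u for u in urls if not rp.can_fetch("Googlebot", u)]
print(f"{len(blocked)} of {len(urls)} sitemap URLs look blocked to a standards-based parser")
for u in blocked[:20]:
    print("blocked:", u)
```
-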
Hi icourse, thanks for your question! You've received some thoughtful responses. Did any of them help you sort your issue out? If so, please mark one or more as a "Good Answer." Thanks!
Christy
-
I'd still speak with the hosting provider. It may be a firewall setting, not the CDN.
-
Hi Donna, we are using Cloudflare, which may have blocked your scan/bot.
We temporarily disabled Cloudflare, submitted a new robots.txt and a new sitemap, and still get the same warnings. -
I recently updated a WordPress website from noindex to indexed, and I still had a warning in Search Console for a day or two; the homepage was initially indexed with the meta description saying "A description for this website ...", even though Fetch and Render and the robots.txt test were both fine. If you are absolutely sure there isn't anything wrong, I would give it a couple of days. In our case, I resubmitted the homepage to Google to speed up the process and that fixed it.
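If it helps rule things out while waiting, here is a small sketch (example.com is a placeholder) that fetches a page and reports whether a noindex is still being sent, either as an X-Robots-Tag header or as a robots meta tag in the HTML:

```python
# A sketch (not WordPress-specific) that checks whether a page still sends a
# noindex, via the X-Robots-Tag header or a <meta name="robots"> tag.
import urllib.request

URL = "https://example.com/"  # placeholder; use the homepage you resubmitted

req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    headers = {k.lower(): v.lower() for k, v in resp.getheaders()}
    html = resp.read().decode("utf-8", errors="replace").lower()

problems = []
if "noindex" in headers.get("x-robots-tag", ""):
    problems.append("X-Robots-Tag response header still contains noindex")
if 'name="robots"' in html and "noindex" in html:
    problems.append("the HTML may still contain a robots noindex meta tag")
print("; ".join(problems) or "no obvious noindex found")
```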
-
Have you checked with your website hosting provider? They may be blocking bots at a server level. I know when I tried to scan your site I got a connection timeout error.
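One way to check for that yourself is to request the site with a crawler-like User-Agent and compare it to a normal browser request; if the crawler-style request times out or gets a 403 while the browser one succeeds, something at the server, firewall, or CDN level is filtering bots. A rough sketch (the User-Agent strings are illustrative, and this is no substitute for Fetch as Google):

```python
# Compare how the server answers a browser-like request versus a crawler-like
# one. A sketch only: real Googlebot verification uses reverse DNS, and some
# firewalls/CDNs filter by IP rather than by User-Agent.
import urllib.error
import urllib.request

URL = "https://www.languagecourse.net/"
AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "crawler": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for label, ua in AGENTS.items():
    req = urllib.request.Request(URL, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            print(f"{label}: HTTP {resp.status}")
    except urllib.error.HTTPError as err:
        print(f"{label}: HTTP {err.code}")
    except Exception as err:  # timeouts, connection resets, etc.
        print(f"{label}: request failed ({err})")
```
-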
Related Questions
-
Disallow URLs ENDING with certain values in robots.txt?
Is there any way to disallow URLs ending in a certain value? For example, if I have the following product page URL: http://website.com/category/product1, and I want to disallow /category/product1/review, /category/product2/review, etc. without disallowing the product pages themselves, is there any shortcut to do this, or must I disallow each gallery page individually?
Intermediate & Advanced SEO | | jmorehouse0 -
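For the wildcard question above: Google and Bing both document support for the * wildcard and the $ end-of-URL anchor in robots.txt, so a single rule can cover every review sub-page without touching the product pages themselves. A sketch (worth verifying in Search Console's robots.txt tester before deploying, since the base robots exclusion standard does not define wildcards):

```
User-agent: *
Disallow: /*/review$
```

With the trailing $, /category/product1/review is disallowed while /category/product1 stays crawlable; dropping the $ would also block anything deeper, such as /category/product1/review/page2.
-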
Canonical URL on search result pages
Hi there, Our company sells educational videos to Nurses via subscription. I've been looking at their video search results page:
http://www.nursesfornurses.com.au/cpd When you click on a category, the URL appears like this:
http://www.nursesfornurses.com.au/cpd?view=category&cat=9&name=Acute+Surgical+Nursing
http://www.nursesfornurses.com.au/cpd?view=category&cat=6&name=Medications Would this be an instance where I'd use the canonical tag to redirect each search results page? Bearing in mind the /cpd page is under /Nursing cpd, and that /Nursing cpd is our best performing page in search engines, would it be better to refer it to the 'Nursing CPD' rather than the 'CPD' page? Any advice is very welcome,
Thanks,
John
Intermediate & Advanced SEO | | 9868john0 -
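For a setup like the one above, the usual mechanism (a generic sketch, not specific to this site's platform) is a canonical link element in the head of each filtered result URL pointing at the page you want signals consolidated on, for example:

```html
<!-- Hypothetical: placed in the <head> of
     /cpd?view=category&cat=9&name=Acute+Surgical+Nursing -->
<link rel="canonical" href="http://www.nursesfornurses.com.au/cpd" />
```

Keep in mind a canonical is a consolidation hint, not a redirect, so it only makes sense if the category views substantially duplicate the target page; genuinely distinct, useful category pages are usually better left with a self-referencing canonical.
-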
Will Using Attributes For Landing Pages In Magento Dilute Page Rank?
Hello Mozzers! We have an ecommerce site built on Magento. We would like to use attribute filters in our layered navigation for landing page purposes. Each page will have a unique URL, Meta Title and Meta Description. For example: URL: domain.com/art/abstract (category is Art, attribute is Abstract) Title: Abstract Art For Sale Meta: Blah Blah Blah. Currently these attribute pages are not being indexed by Google as they are set in Google's URL parameters. We would like to edit the parameters so Google starts indexing some of the attribute filters that users search for, so they can be used as landing pages. Does anyone have experience with this? Is this a good idea? What are the consequences? Will this dilute Page Rank? Could this destroy the world? Cheers! MozAddict
Intermediate & Advanced SEO | | MozAddict0 -
Best way to remove low quality paginated search pages
I have a website that has around 90k pages indexed, but after doing the math I realized that I only have around 20-30k pages that are actually high quality; the rest are paginated pages from search results within my website. Every time someone searches a term on my site, that term gets its own page, which includes all of the relevant posts associated with that search term/tag. My site had around 20k different search terms, all being indexed. I have paused new search terms from being indexed, but what I want to know is whether the best route would be to 404 all of the useless paginated pages from the search term pages. And if so, how many should I remove at one time? There must be 40-50k paginated pages and I am curious to know what would be the best bet from an SEO standpoint. All feedback is greatly appreciated. Thanks.
Intermediate & Advanced SEO | | WebServiceConsulting.com0 -
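One common alternative to 404ing tens of thousands of internal search URLs in one go (named here as an alternative, not what the poster asked about) is to let them keep resolving but send a noindex, so they drop out of the index first. A sketch assuming an Apache front end with mod_headers, and assuming the search pages live under a /search/ path (both assumptions):

```apacheconf
# Hypothetical Apache (vhost) config: mark internal search-result URLs noindex
<LocationMatch "^/search/">
    Header set X-Robots-Tag "noindex, follow"
</LocationMatch>
```

If the 404 route is taken instead, the status code itself does the work and Google drops the pages gradually as it recrawls them.
-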
Robots.txt
What would be a perfect robots.txt file? My site is propdental.es. Can I just place: User-agent: * Or should I write something more???
Intermediate & Advanced SEO | | maestrosonrisas0 -
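On the question above: a User-agent line by itself is not a complete group; the original robots.txt standard expects at least one Disallow line per group, and an empty Disallow value means "allow everything". A minimal allow-all file would look like the sketch below (the Sitemap line is optional, and the URL is only a guess at where this site's sitemap might live):

```
User-agent: *
Disallow:

Sitemap: https://www.propdental.es/sitemap.xml
```
-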
How to do a site migration followed by a domain migration and avoid 301 redirect chains?
Hi all, The current roadmap for our Eng team has us performing a site migration (redirecting one subfolder to another subfolder) and then a domain migration shortly after. The way I see it, I have 2 scenarios (the 1st involves the site migration THEN the domain migration, and the 2nd is the site migration and domain migration being done simultaneously):
Scenario 1: olddomain.com/subfolder-old to olddomain.com/subfolder-new, THEN olddomain.com/subfolder-new to newdomain.com/subfolder-new, AND olddomain.com/subfolder-old to newdomain.com/subfolder-new
Scenario 2: olddomain.com/subfolder-old to newdomain.com/subfolder-new
I also understand that there are two best practices for a domain migration, and they are 1) keep everything the same that you can to help Google understand it is the same page, just on a different domain, and 2) avoid chain redirects. As you can imagine, scenario 1 requires more Eng costs than scenario 2. So, my question is, is scenario 2 a perfectly viable option or should I make the push to go for scenario 1? Any advice is greatly appreciated!
Intermediate & Advanced SEO | | brad-causes1 -
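Whichever scenario is chosen, the chain itself can be avoided at the server level by always redirecting in a single hop to the final destination. A sketch assuming Apache with mod_rewrite and an .htaccess file at the web root of olddomain.com (the domain and folder names are the placeholders from the question):

```apacheconf
# Hypothetical .htaccess on olddomain.com: every old URL 301s directly to its
# final home in one hop, even after both migrations have shipped
RewriteEngine On
RewriteRule ^subfolder-old/(.*)$ https://newdomain.com/subfolder-new/$1 [R=301,L]
RewriteRule ^subfolder-new/(.*)$ https://newdomain.com/subfolder-new/$1 [R=301,L]
```

The key point is that if the subfolder move ships first, the subfolder-old rule gets updated at domain-migration time so crawlers never pass through two 301s.
-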
202 error page set in robots.txt versus using crawl-able 404 error
We currently have our error page set up as a 202 page that is unreachable by the search engines, as it is currently blocked in our robots.txt file. Should the current error page be a 404 error page and reachable by the search engines? Is there more value, or is it better practice, to use a 404 over a 202? We noticed in our Google Webmaster account that we have a number of broken links pointing to the site, but the 404 error page was not accessible. If you have any insight that would be great; if you have any questions please let me know. Thanks, VPSEO
Intermediate & Advanced SEO | | VPSEO0 -
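A quick way to see what crawlers actually get for a missing URL is to request a page that should not exist and look at the status code; a small sketch (the domain is a placeholder):

```python
# Request a URL that should not exist and report the status the error page
# returns. A 404 (or 410) tells crawlers the page is gone; a 2xx means missing
# pages look like live content ("soft 404s").
import urllib.error
import urllib.request

URL = "https://www.example.com/this-page-should-not-exist-12345"  # placeholder domain

req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
try:
    with urllib.request.urlopen(req) as resp:
        print(f"Got HTTP {resp.status} - crawlers see this as a normal page")
except urllib.error.HTTPError as err:
    print(f"Got HTTP {err.code}")
```

Broken inbound links will only be reported usefully if the error page returns a real 404 and is not blocked in robots.txt, since the status code of a robots-blocked URL is never seen by the crawler.
-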
Subdomains - duplicate content - robots.txt
Our corporate site provides MLS data to users, with the end goal of generating leads. Each registered lead is assigned to an agent, essentially in a round robin fashion. However we also give each agent a domain of their choosing that points to our corporate website. The domain can be whatever they want, but upon loading it is immediately directed to a subdomain. For example, www.agentsmith.com would be redirected to agentsmith.corporatedomain.com. Finally, any leads generated from agentsmith.easystreetrealty-indy.com are always assigned to Agent Smith instead of the agent pool (by parsing the current host name). In order to avoid being penalized for duplicate content, any page that is viewed on one of the agent subdomains always has a canonical link pointing to the corporate host name (www.corporatedomain.com). The only content difference between our corporate site and an agent subdomain is the phone number and contact email address where applicable. Two questions: Can/should we use robots.txt or robot meta tags to tell crawlers to ignore these subdomains, but obviously not the corporate domain? If question 1 is yes, would it be better for SEO to do that, or leave it how it is?
Intermediate & Advanced SEO | | EasyStreet0
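On the mechanics of question 1 above: robots.txt is always per-host, so the agent subdomains can serve their own file without touching the corporate domain. A sketch assuming an Apache front end with mod_rewrite and an .htaccess file at the web root (the hostnames are the placeholders from the question):

```apacheconf
# Hypothetical: agent subdomains get a crawl-blocking robots.txt, the main
# host keeps the normal one. /robots-agents.txt would contain:
#   User-agent: *
#   Disallow: /
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.corporatedomain\.com$ [NC]
RewriteRule ^robots\.txt$ /robots-agents.txt [L]
```

One caveat worth weighing against question 2: if the subdomains are blocked in robots.txt, crawlers can never read the canonical tags on them, so many sites prefer leaving them crawlable and relying on the cross-domain canonicals already in place.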