804 error preventing website from being crawled
-
Hi
For both subdomains https://us.sagepub.com and https://uk.sagepub.com, crawling is being prevented by an 804 error.
I can't see any reason why this should be, as all content is served over HTTPS.
Thanks
-
I'm afraid that's the case if you're using CloudFront, since it serves HTTPS using SNI (Server Name Indication).
Our system really ought to be able to handle that type of configuration, but it's proving to be quite an undertaking from an engineering perspective, so I'm not able to say when our crawler will be able to accommodate SNI. Sorry for the trouble!
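If you want to check for yourself whether a host only presents its real certificate when SNI is sent (the situation described above), here's a minimal sketch in Python. The function names are mine, and treating "SNI-dependent" as the likely cause of the 804 is an assumption based on this reply, not something Moz documents:

```python
import socket
import ssl

def fetch_cert(host, port=443, use_sni=True):
    """Return the server's leaf certificate in DER form.

    With use_sni=False the ClientHello carries no SNI extension,
    roughly simulating a crawler without SNI support."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # nothing to match against without SNI
    ctx.verify_mode = ssl.CERT_NONE  # we only want to see which cert comes back
    server_hostname = host if use_sni else None
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
            return tls.getpeercert(binary_form=True)

def sni_dependent(cert_with_sni, cert_without_sni):
    """True if the site presents a different (or no) certificate to a
    client that doesn't send SNI, i.e. a non-SNI crawler would fail."""
    return cert_without_sni is None or cert_with_sni != cert_without_sni
```

In practice you'd wrap the `use_sni=False` call in a try/except, since an SNI-only server may refuse the handshake outright rather than serve a fallback certificate; either outcome means a non-SNI client can't reach the site over HTTPS.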
-
I should have read the whole thread... I only read the first part.
We use CloudFront, so I guess Moz can't crawl the sites.
-
Hi there.
The problem is likely misconfigured SSL.
There was a similar Q&A here: https://moz.com/community/q/804-https-ssl-error
Have a read and see if it answers your question.