"Does not respond to web requests" error
-
When trying to set up a new campaign I get the following message:
"Roger has detected a problem: We have detected that the domain www.chicagofinancialadvisers.com does not respond to web requests. Using this domain, we will be unable to crawl your site or present accurate SERP information."

Can someone please tell me what I need to do on my site to make this work? I haven't seen this before and have done many other campaigns. Thanks a lot!
-
Thanks Ryan. That worked!
-
Hello Brien,
I noticed your robots.txt file currently shows as:
`User-agent: *`
That is not a properly formatted robots.txt file: every `User-agent` group needs at least one `Disallow` line. Try adjusting it as follows:
`User-agent: *`
`Disallow:`
This change may resolve the issue. If not, I would suggest checking your server firewall settings to see whether any ports are closed or any crawlers are being blocked.
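As a sanity check, Python's standard `urllib.robotparser` confirms that the corrected file (with the blank `Disallow:` line) blocks nothing. A minimal sketch, using a hypothetical example domain:

```python
# Verify that "User-agent: *" followed by an empty "Disallow:" allows crawling.
from urllib.robotparser import RobotFileParser

rules = """User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# An empty Disallow directive blocks nothing, so any path is allowed.
print(parser.can_fetch("rogerbot", "https://www.example.com/any/page"))  # True
```

The same check against a file containing only `User-agent: *` shows why the extra line matters: without any directive, some parsers treat the group as malformed.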
Related Questions
-
Since July 1, we've had a HUGE jump in errors on our weekly crawl. We don't think anything has changed on our website. Has Moz changed something that would account for a large leap in duplicate content and duplicate title errors?
Our error report went from 1,900 to 18,000 in one swoop, starting right around the first of July. The errors are duplicate content and duplicate title, as if it does not see our 301 redirects. Any insights?
Moz Pro | KristyFord
-
Having 1 page crawl error on 2 sites
Help! A few weeks back, my dev team made some "changes" (that I don't know anything about), but ever since then, my Moz crawl has only shown one page for either http://betamerica.com or http://fanex.com. Moz support was helpful in pointing out a redirect loop, and I asked my team to fix it, which it looks to me like they have. Still, 1 page. I used SEO Book's spider tool and it also only sees 1 page, and it reports the sites as http://https://betamerica.com (for example), which is just weird. I don't know enough about .htaccess or server configuration to figure out what's going on, so if someone can help me figure that out, I'd appreciate it.
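For what it's worth, a redirect loop of the kind described here can be spotted without server access by following each `Location` target and watching for a repeated address. A minimal sketch in Python, using a hypothetical in-memory redirect map instead of live requests:

```python
# Detect a redirect loop by tracking every URL visited along the chain.
from urllib.parse import urljoin

def find_loop(redirects, start, max_hops=20):
    """redirects maps URL -> redirect target (None means a final 200)."""
    seen = []
    url = start
    for _ in range(max_hops):
        if url in seen:
            return seen[seen.index(url):]  # the looping cycle
        seen.append(url)
        target = redirects.get(url)
        if target is None:
            return None  # reached a page that responds normally
        url = urljoin(url, target)  # resolve relative Location headers
    return seen

# Example: http redirects to https, which redirects back to http.
chain = {
    "http://example.com/": "https://example.com/",
    "https://example.com/": "http://example.com/",
}
print(find_loop(chain, "http://example.com/"))
```

In a real check the map would be replaced by HEAD requests (without auto-following redirects) against the live site; the loop-detection logic stays the same.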
Moz Pro | BetAmerica
-
Metric "Total Links": can somebody explain this metric to me?
Dear colleagues, who can explain the following to me? Under the subdomain metrics, the "Total links" figure is huge compared to the sum of internal and external links. I do not understand this metric. Can somebody help me explain "total links"? I have to present these metrics to my customer and do not want to have "don't know" as an answer 😉 Thanks, Alain Nijholt, BMC Internet Marketing
Moz Pro | bmcinternetmarketing
-
Increase in 404 errors after change of encoding
Hello, we have just launched a new version of our website with a new UTF-8 encoding. The thing is, we use commas as separators, and since the new website went live I have seen a massive increase in 404 errors for comma-encoded URLs. Here is an example: http://web.bons-de-reduction.com/annuaire%2C321-sticker%2Csite%2Cpromotions%2C5941.html instead of http://web.bons-de-reduction.com/annuaire,321-sticker,site,promotions,5941.html. I checked with Screaming Frog SEO Spider and Xenu, and I can't find any encoded URLs. Does anyone have a clue how to fix this? Thanks
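It isn't clear where the encoding is being introduced, but the two forms are easy to relate: `%2C` is simply the percent-encoded comma, and RFC 3986 allows a literal comma in a URL path. A minimal sketch with Python's standard `urllib.parse`, using the path from the question:

```python
# Show that the %2C form and the comma form are the same path,
# and that commas may legally stay unencoded in a URL path.
from urllib.parse import quote, unquote

encoded = "/annuaire%2C321-sticker%2Csite%2Cpromotions%2C5941.html"
decoded = unquote(encoded)
print(decoded)  # /annuaire,321-sticker,site,promotions,5941.html

# Commas are "sub-delims" in RFC 3986, so adding "," to quote()'s safe
# set round-trips the decoded path unchanged.
print(quote(decoded, safe="/,") == decoded)  # True
```

A practical server-side mitigation (independent of finding the source) would be a 301 rewrite from the `%2C` form to the comma form, so both resolve to the same page.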
Moz Pro | RetailMeNotFr
-
SEOMoz reports and 404 errors
My SEOMoz report shows a 404 error, found today, for this URL: http://globalheavyhaul.com/google.com. I do not have this anchor text anywhere on my website. How did Roger figure out that somebody looked for that page? Do I need to worry about 404 errors that are the result of user mistakes, rather than actual bad links?
Moz Pro | FreightBoy
-
Why does a Linkscape API request hang while extracting data?
Hi, I am using the Linkscape API to get follow and nofollow links. I use cron to fetch data for each URL in sitemap.xml. However, while the cron job is running, the extraction hangs on some pages, which I later need to delete manually to restart the run. Does anyone have any idea why this is happening? How can I skip such pages?
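One common cause of a hang like this is an HTTP request with no timeout: most clients, including Python's `urlopen`, will wait indefinitely on a stalled server unless told otherwise. A minimal sketch of a timeout-guarded fetch (a hypothetical wrapper, not the actual Linkscape client; the unreachable test address is illustrative):

```python
# Fetch a URL with a hard timeout so a stalled response is skipped
# instead of hanging the whole cron job.
import urllib.request
from urllib.error import URLError

def fetch_with_timeout(url, timeout=10):
    """Return the response body, or None if the request stalls or fails."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (URLError, TimeoutError):
        return None  # log and move on to the next sitemap URL

# Unreachable host: returns None quickly instead of blocking the run.
print(fetch_with_timeout("http://10.255.255.1/", timeout=1) is None)
```

With this pattern, a page that never responds is logged and skipped automatically, so there is no need to delete entries by hand to restart the job.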
Moz Pro | Ravi_Pathak
-
20,000 site errors and 10,000 pages crawled
I have recently built an e-commerce website for the company I work at. It's built on OpenCart. Say, for example, we have a chair for sale; the URL will be www.domain.com/best-offers/cool-chair. That's fine: SEOmoz crawls those URLs and reports any errors under them correctly. On each product listing we have several options, including zoom options (which allow the user to zoom in on the image for a more detailed look). When a different zoom type is selected, it is appended to the URL, for example www.domain.com/best-offers/cool-chair?zoom=1, and there are 3 different zoom types. So effectively four URLs are treated as distinct when in fact they are all one page, and SEOmoz has interpreted them this way, crawled the 10,000 pages it thinks exist, and thrown up 20,000 errors. Does anyone have any idea how to solve this?
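The usual on-page fix is a `rel="canonical"` tag pointing each `?zoom=` variant at the base product URL (and/or telling the crawler to ignore the parameter). The normalization the crawler should end up performing can be sketched in Python; the `zoom` parameter name comes from the question, everything else is illustrative:

```python
# Collapse URL variants that differ only in display parameters (here, zoom)
# down to a single canonical product URL.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

IGNORED_PARAMS = {"zoom"}  # parameters that do not change page content

def canonicalize(url):
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in IGNORED_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

variants = [
    "http://www.domain.com/best-offers/cool-chair",
    "http://www.domain.com/best-offers/cool-chair?zoom=1",
    "http://www.domain.com/best-offers/cool-chair?zoom=2",
]
print({canonicalize(u) for u in variants})  # one URL, not three
```

Once every zoom variant declares the base URL as canonical, the crawler should fold the duplicates together instead of reporting each one as a separate page.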
Moz Pro | CompleteOffice
-
How Do I Interpret Data From "Competitive Link Finder" Search Results?
Ok, I'm a beginner here. A few basic questions: I ran a "competitive link finder" search. 1. What does "Subdomain mR" mean? 2. What does Subdomain mT mean? 3. What do the scores in each column mean? Example 8.7 under Subdomain mR. 4. How do I interpret this? thanks for your help! Dan Castro
Moz Pro | DanManCastro