Unsolved Ooops. Our crawlers are unable to access that URL
-
Hello,
I have entered my site faroush.com but I got this error:
Ooops. Our crawlers are unable to access that URL - please check to make sure it is correct
What is the problem?
-
I'm encountering the same problem with my website, CFMS Bill Status. It seems that my main website is completely inaccessible to web crawlers. I've investigated all the likely causes, such as server configuration, robots.txt restrictions, and security measures, but still haven't found any clues.
-
Have you tried the steps I suggested earlier, like checking your settings?
-
Make sure your website is publicly accessible and isn't blocked by any security settings. Try opening it from different devices and networks to see whether it loads. Also check whether your site's configuration is stopping search engines from seeing it: look in your robots.txt file for any rules that block crawlers, and if you find any, make sure they aren't preventing search engines (or Moz's crawler) from accessing your site.
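For anyone who wants to run those two checks quickly, here is a minimal sketch using only Python's standard library. The faroush.com address is taken from the question above; the rogerbot user agent is an assumption (it is commonly cited as Moz's crawl user agent), so swap in whichever crawler you care about.

```python
# Minimal check of the two things mentioned above, using only the
# standard library. SITE is the domain from the question; USER_AGENT is
# an assumption (rogerbot is commonly cited as Moz's crawl user agent).
from urllib.request import Request, urlopen
from urllib.robotparser import RobotFileParser

SITE = "https://faroush.com"
USER_AGENT = "rogerbot"

# 1. Does robots.txt allow this user agent to fetch the homepage?
robots = RobotFileParser()
robots.set_url(SITE + "/robots.txt")
robots.read()
print("robots.txt allows crawl:", robots.can_fetch(USER_AGENT, SITE + "/"))

# 2. Does the server respond when the request identifies itself as a bot?
request = Request(SITE + "/", headers={"User-Agent": USER_AGENT})
try:
    with urlopen(request, timeout=15) as response:
        print("HTTP status:", response.status)
except Exception as error:  # a 403 here often points to a firewall or WAF
    print("Request failed:", error)
```

If the first check prints False, a robots.txt rule is blocking the crawler; if the second request fails with a 403 or times out, a firewall or bot-protection layer is the more likely culprit.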
-
I am getting the same error on my website, Apne TV. It's been 7 days, and I keep getting the same error again and again.
Related Questions
-
Unsolved Link Tracking List Error
"I have been maintaining 5 directories of backlinks in the 'Link Tracking List' section for several months. However, I am unable to locate any of these links at this time. Additionally, the link from my Moz profile is currently broken and redirects to an error page, not to Elche Se Mueve. Given the premium pricing of Moz's services, these persistent errors are unacceptable."
Moz Pro | Alberto D.
-
Unsolved 403 errors for assets which work fine
Hi,
I am facing an issue with our Moz Pro account. We have images stored in an S3 bucket, e.g. https://assets2.hangrr.com/v7/s3/product/151/beige-derby-cotton-suit-mb-2.jpg. Hundreds of such images show up in link opportunities (the Top Pages tool) as 403, but all of these images work fine and return status 200. Can't seem to solve this. Thanks.
Moz Tools | Skites2
-
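Not part of the original thread, but one quick way to test whether a 403 like the one in the hangrr.com question above depends on who is asking is to request the same asset with different User-Agent headers. A sketch follows; the image URL is taken from that question, and the user-agent strings are illustrative assumptions.

```python
# Request the same asset with two different User-Agent headers to see
# whether the 403 depends on who is asking. URL is from the question
# above; the user-agent strings are illustrative assumptions.
from urllib.error import HTTPError
from urllib.request import Request, urlopen

URL = "https://assets2.hangrr.com/v7/s3/product/151/beige-derby-cotton-suit-mb-2.jpg"
USER_AGENTS = {
    "browser-like": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "crawler-like": "rogerbot",  # assumed Moz crawl user agent
}

for label, user_agent in USER_AGENTS.items():
    request = Request(URL, headers={"User-Agent": user_agent}, method="HEAD")
    try:
        with urlopen(request, timeout=15) as response:
            print(f"{label}: HTTP {response.status}")
    except HTTPError as error:
        print(f"{label}: HTTP {error.code}")
```

If the browser-like request returns 200 while the crawler-like one returns 403, the bucket or a CDN/WAF in front of it is filtering by user agent rather than the asset itself being broken.
-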
Unsolved No replies from help@moz.com - one of our IPs is blocked by Cloudflare so we cannot access Moz Community from there
Hi all,
I am a bit at my wits' end trying to get some acknowledgement from Moz. I have had no replies, no ticket auto-replies, and no updates on any of the messages I have sent via the Moz Help Form on the website. Literally nothing. I wanted to avoid having to post publicly, but does anyone know how to raise a "technical problem" ticket with Moz? help@moz.com never replies and the Help Form doesn't generate any kind of ticket.
From our main office we get an "Access denied" error (via Cloudflare) specifically for the Moz Community area. This happened to us in February of this year and has been happening again all through May. After testing with our IT team, we determined that Moz's Cloudflare account has incorrectly blocked the dedicated IP address specific to the internet connection at our head office. This means that none of our Moz user accounts can access anything related to the Community area when working at the studio; we can only do so when working remotely (i.e. from some other IP address).
This is incredibly frustrating, particularly as we've been on a paid Moz account for many years. I have sent numerous email requests and messages via the form and have never heard back from anyone at all. The problem has been ongoing for some time, and I guess it is my fault because I tried to politely wait a fair amount of time between each follow-up, only to realize that I don't think anyone is monitoring help@moz.com or the form submissions, or even looking into the issue for me. I am hoping this message is seen by someone at Moz so they can let me know what is going on. Guys..... c'mon.....
Product Support | DanielDL
-
What is the best way to treat URLs ending in /?s=
Hi community, I'm going through the list of crawl errors visible in my Moz dashboard and there are a few URLs ending in "/?s=". How should I treat these URLs? Redirects? Thanks for any help.
Moz Pro | Easigrass
-
How to track data from old site and new site with the same URL?
We are launching a new site within the next 48 hours. We have already purchased the 30-day trial and we will continue to use this tool once the new site is launched. Just looking for some tips and/or best practices so we can compare the old data vs. the new data moving forward. Thank you in advance for your response(s). PB3
Moz Pro | Issuer_Direct
-
Moz & Xenu Link Sleuth unable to crawl a website (403 error)
It could be that I am missing something really obvious, however we are getting the following error when we try to use the Moz tool on a client website. (I have read through a few posts on 403 errors, but none that appear to be the same problem as this.)
Moz result:
Title: 403 : Error
Meta Description: 403 Forbidden
Meta Robots: Not present/empty
Meta Refresh: Not present/empty
Xenu Link Sleuth result:
Broken links, ordered by link: error code: 403 (forbidden request), linked from page(s):
Thanks in advance!
Moz Pro | ZaddleMarketing
-
Crawlers crawl weird long urls
I did a crawl start for the first time and I get many errors, but the weird fact is that the crawler tracks duplicate, long, non-existing URLs. For example (to be clear), there is a page:
www.website.com/dogs/dog.html
but then it continues crawling:
www.website.com/dogs/dog.html
www.website.com/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dog.html
www.website.com/dogs/dogs/dogs/dogs/dogs/dog.html
What can I do about this? Screaming Frog gave me the same issue, so I know it's something with my website.
Moz Pro | r.nijkamp
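One common cause of this pattern (offered here as an assumption, not a diagnosis of this particular site) is a relative link without a leading slash, e.g. href="dogs/dog.html", repeated on every page; each newly discovered page then resolves the same relative link one level deeper. A tiny sketch of how that resolution plays out:

```python
# Shows how a relative href (no leading slash) resolves one level deeper
# each time it is followed. The page URL matches the pattern from the
# question; the relative_href value is a hypothetical example.
from urllib.parse import urljoin

page = "https://www.website.com/dogs/dog.html"
relative_href = "dogs/dog.html"  # hypothetical link repeated on every page

for _ in range(4):
    page = urljoin(page, relative_href)
    print(page)
# https://www.website.com/dogs/dogs/dog.html
# https://www.website.com/dogs/dogs/dogs/dog.html
# https://www.website.com/dogs/dogs/dogs/dogs/dog.html
# https://www.website.com/dogs/dogs/dogs/dogs/dogs/dog.html
```

If that is the cause, switching the links to root-relative (/dogs/dog.html) or absolute URLs stops the recursion.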
Batch lookup domain authority on a list of URLs?
I found a site that describes how to use Excel to batch-look-up URLs using the SEOmoz API. The only problem is that the SEOmoz API times out and returns 1 if I try dragging the formula down the cells, which leaves me copying, waiting 5 seconds, and copying again. This is basically as slow as manually looking up each URL. Does anyone know a workaround?
Moz Pro | SirSud1
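The usual workaround is to move the lookups out of Excel and into a short script that batches the URLs and pauses between requests. The sketch below is only illustrative: the endpoint, payload, and response fields are assumptions based on Moz's newer Links API v2 (the question predates it), and the credentials, batch size, and sleep interval are placeholders. Check the current API documentation before relying on any of this.

```python
# Sketch of batching the lookups in a script with a pause between
# requests (the same idea as waiting 5 seconds in Excel, but automated).
# The endpoint, payload, and response fields are assumptions based on
# Moz's newer Links API v2; credentials, batch size, and sleep interval
# are placeholders. Check the current API documentation before using.
import json
import time
from base64 import b64encode
from urllib.request import Request, urlopen

ACCESS_ID = "your-access-id"      # placeholder
SECRET_KEY = "your-secret-key"    # placeholder
ENDPOINT = "https://lsapi.seomoz.com/v2/url_metrics"  # assumed endpoint

urls = ["moz.com", "example.com", "example.org"]  # your list of URLs

auth_header = "Basic " + b64encode(f"{ACCESS_ID}:{SECRET_KEY}".encode()).decode()

def lookup(batch):
    body = json.dumps({"targets": batch}).encode()
    request = Request(ENDPOINT, data=body, headers={
        "Authorization": auth_header,
        "Content-Type": "application/json",
    })
    with urlopen(request, timeout=30) as response:
        return json.loads(response.read())

results = []
BATCH_SIZE = 50       # assumed per-request limit
PAUSE_SECONDS = 10    # assumed safe gap for rate limiting

for start in range(0, len(urls), BATCH_SIZE):
    results.extend(lookup(urls[start:start + BATCH_SIZE]).get("results", []))
    time.sleep(PAUSE_SECONDS)

for row in results:
    print(row.get("page"), row.get("domain_authority"))
```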