Can't download my crawl CSV
-
When I click the [download csv] button in my crawl campaign section, nothing happens.
-
I can't download crawl results using either Firefox or Chrome on Fedora 19 (Linux).
-
Hi Bryon
That is very strange! We have mostly seen Firefox work as opposed to Chrome and other browsers. There may be an issue with installed plug-ins, extensions, or security settings that could be blocking access to our database. I did have one member re-install Firefox for a similar issue, and that worked. Maybe that could work for you as well.
-
David, I was able to use Chrome and download it. I'm not sure why Firefox wasn't able to.
Thanks for your help
-
Hi Bryon
I am able to download your report in Firefox ver. 24. I can reply to you with a copy if you submit a support request at http://moz.com/help/contact
Set the subject to attn: david and I will grab the ticket and send it over, and we can continue our correspondence through email.
-
I am still not able to get the button to work. What OS and browser are you using?
-
Hello David.
It's not giving me an option to block or allow. When I click the button, nothing happens. Is this how the button would act if what you say is correct?
-
Hi Bryon
I am so sorry you are not able to download your crawl report. We are seeing a few members unable to access parts of our site due to browser security settings, anti-virus software, and/or firewalls. If you have a firewall, you will want to make sure to allow all traffic from *.moz.com, as we have several servers handling different parts of our web app. Reports are downloaded from vanguard.moz.com, which I believe is being blocked in your case. I was able to download the report using your credentials.
Let me know if you are able to isolate any of the above areas.
Hope it works!
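If you want to rule out a firewall or network block from the command line, a quick check like the one below can help. This is just a sketch of the idea, assuming curl is installed; the exact hostname (vanguard.moz.com) is the one mentioned above as serving report downloads:

```shell
# Request only the response headers from the report server.
# If a firewall is blocking the host, this will hang, time out,
# or report "connection refused" instead of printing a status line.
curl -sI --max-time 10 https://vanguard.moz.com | head -n 1
```

If the command prints an HTTP status line, the server is reachable and the problem is more likely in the browser (extensions or security settings) than in the firewall.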