Can't download my crawl CSV
-
When I click [download csv] in my crawl campaign section, nothing happens.
-
I can't download crawl results using either Firefox or Chrome on Fedora 19 (Linux).
-
Hi Bryon
That is very strange! We have mostly seen Firefox work where Chrome and other browsers have not. There may be an issue with installed plug-ins, extensions, or security settings blocking access to our database. I did have one member re-install Firefox for a similar issue, and that worked. Maybe that could work for you as well.
-
David, I was able to use Chrome and download it. I'm not sure why Firefox wasn't able to.
Thanks for your help
-
Hi Bryon
I am able to download your report in Firefox ver. 24. I can reply to you with a copy if you submit a support request at http://moz.com/help/contact
Set the subject to "attn: david" and I will grab the ticket and send it over, and we can continue our correspondence through email.
-
I am still not able to get the button to work. What OS and browser are you using?
-
Hello David.
It's not giving me an option to block or allow. When I click the button, nothing happens. Is this how the button would act if what you say is correct?
-
Hi Bryon
I am so sorry you are not able to download your crawl report. We are seeing a few members unable to access parts of our site due to browser security settings, anti-virus software, and/or firewalls. If you have a firewall, you will want to make sure to allow all traffic from *.moz.com, as we have a few servers handling different parts of our web app. Reports are downloaded from vanguard.moz.com, which I believe is being blocked in your case. I was able to download the report using your credentials.
Let me know if you are able to isolate any of the above areas.
Hope it works!
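One way to isolate a firewall or DNS problem like the one described above is to test whether the report server accepts connections at all, outside the browser. This is just a sketch: it assumes the report host is vanguard.moz.com (as mentioned in the reply above) and that reports are served over HTTPS on port 443.

```python
import socket

def reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the report server mentioned in the reply, plus the main site for comparison.
for host in ("vanguard.moz.com", "moz.com"):
    print(host, "reachable" if reachable(host) else "blocked or unreachable")
```

If moz.com is reachable but vanguard.moz.com is not, that points at a firewall or DNS rule blocking the subdomain rather than a browser problem.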
Related Questions
-
Moz not able to crawl our site - any advice?
When I try to crawl our site through Moz, it gives this message: "Moz was unable to crawl your site on Aug 7, 2019. Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster." I have been through all the help and there doesn't seem to be any issue. You can check the site and robots.txt here: https://myfamilyclub.co.uk/robots.txt. Anyone got any advice on where I could go to get this sorted?
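One way to sanity-check a robots.txt ban like the one in the question above is Python's built-in robots.txt parser. This is only a sketch: the rules and user-agent names below are made up for illustration, and it covers only robots.txt, not the X-Robots-Tag header or meta robots tag that the error message also mentions.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, similar to rules that would ban a crawler.
robots_txt = """\
User-agent: dotbot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("dotbot", "https://example.com/"))                # banned site-wide
print(rp.can_fetch("rogerbot", "https://example.com/"))              # allowed via the * group
print(rp.can_fetch("rogerbot", "https://example.com/private/page"))  # blocked path
```

For a live site, `rp.set_url("https://example.com/robots.txt")` followed by `rp.read()` fetches and parses the real file.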
Getting Started | MyFamilClubLtd -
Crawling issue
Hi, I have to set up a campaign for a webshop. This webshop is a subdomain itself. First question: the two subfolders I need to track are /nl_BE and /fr_BE. What is the best way to handle this? Shall I set up two different campaigns, one for each subfolder, or shall I just make one campaign and add tags to keywords? Second question: it seems like Moz can't crawl enough pages. There are no disallows in the robots.txt. Should I try putting the following at the top of my robots.txt?
User-agent: rogerbot
Disallow:
Or is it because I want to crawl only a subdomain that it doesn't work? Thanks
Getting Started | Mat_C -
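On the robots.txt snippet in the question above: an empty `Disallow:` value permits everything for that user-agent, so adding it cannot un-block pages that were never blocked. A small sketch with Python's standard-library parser shows the difference (the URL is a made-up example):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_lines, agent, url="https://shop.example.com/nl_BE/"):
    """Parse the given robots.txt lines and test whether agent may fetch url."""
    rp = RobotFileParser()
    rp.parse(robots_lines)
    return rp.can_fetch(agent, url)

# An empty Disallow value allows everything for that user-agent...
print(allowed(["User-agent: rogerbot", "Disallow:"], "rogerbot"))    # True
# ...while "Disallow: /" blocks the whole site.
print(allowed(["User-agent: rogerbot", "Disallow: /"], "rogerbot"))  # False
```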
Moz only crawling one page of a campaign, please help
Today I set up a new campaign for a client; however, the crawl has only found the home page and is saying that the URL is unavailable. The site is definitely live and the URL is correct. I have set up the campaign 3 times: once with the full address (http://www.), once with www., and once with just the domain name. All three of these have come back with one page crawled and "unavailable" above the URL. It is picking up the crawl issues on the page and showing domain authority, but I don't know why it's not crawling other pages. Prior to setting up the campaign I did a site crawl and Moz found everything then, so I don't know why it isn't now. Please help. Thanks
Getting Started | Wrapped -
How to have MOZ site crawl pre-launch
Hi, Our new website is about to launch. We would love to have moz.com SITE CRAWL our site before launch, for issues like "missing meta description" and everything else that moz.com checks. We would love to do it before we launch. The new site is currently on a different domain than our live site. example.com <-- this is our live site. new-site.com <-- this is our "staging" server with the new site. We have a long-running campaign for example.com. Do we need to create a new campaign for new-site.com? Or is there some other, simpler way? When we launch we will switch the site from new-site.com to example.com, so example.com will be the address for the new site. Any ideas or suggestions? Best practices? Edit: Forgot to say thank you for your help and input 🙂
Getting Started | tandvarden -
How do I update the crawl issues & Notifications?
I have a list of errors, mostly relating to missing meta descriptions. I have added a meta description to a page, visited the site and viewed the source, and the meta description is now there. But when I go to analyze issues, the report it gives back for the link still shows the same missing meta description as before. How do I get it to update and recognize that the issue has been fixed?
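When a crawl report and view-source disagree, it can help to check what the server actually returns, independent of any browser cache. A minimal sketch using Python's standard-library HTML parser (the sample HTML below is made up; a real check would fetch the live page and feed its body to the parser):

```python
from html.parser import HTMLParser

class MetaDescriptionFinder(HTMLParser):
    """Collects the content of every <meta name="description"> tag."""
    def __init__(self):
        super().__init__()
        self.descriptions = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "description" and "content" in d:
                self.descriptions.append(d["content"])

html = '<html><head><meta name="description" content="My shop."></head></html>'
finder = MetaDescriptionFinder()
finder.feed(html)
print(finder.descriptions)  # ['My shop.']
```

If the description is present in the served HTML, the stale report is on the crawler's side and should clear on its next scheduled crawl.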
Getting Started | ETGg -
Where to download SERP report
We signed up, but the Moz interface seems very confusing. It sent me an email that the SERP report is ready -- but where? And how do I see it?
Getting Started | Vinayj -
What are the solutions for Crawl Diagnostics?
Hi Mozers, I am pretty new to SEO and wanted to know what the solutions are for the various errors reported in the crawl diagnostics. If this question has already been asked, please guide me in the right direction. The following queries are specific to my site; I just need help with these 2: 1. Error 404 (about 60 errors): these are for all the PA 1 links and are no longer on the server. What do I do with these? 2. Duplicate page content and title (about 5000): most of these are automatic URLs that are generated when someone fills in any info on our website. What do I do with these URLs? They are, for example: www.abc.fr/signup.php?id=001, then www.abc.fr/signup.php?id=002, and so on. What do I need to do, and how? Any help would be highly appreciated. I have read a lot on the forums about duplicate content but don't know how to implement this in my case; please advise. Thanks in advance. CY
Getting Started | Abhi81870
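For parameter-generated duplicates like the signup URLs in point 2 above, the usual approaches are a rel=canonical tag pointing at the parameter-free URL, or blocking the parameterized paths in robots.txt. As a rough sketch of what "canonicalizing" means here, the snippet below collapses the example URL variants from the question to one address; the actual fix still has to be applied in your HTML or robots.txt.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Drop the query string and fragment so parameter variants collapse to one URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

urls = [
    "http://www.abc.fr/signup.php?id=001",
    "http://www.abc.fr/signup.php?id=002",
]
# Both variants reduce to a single canonical form.
print({canonical_url(u) for u in urls})
```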