Reset Crawler
-
Hello, does anyone know how to reset the crawler? We recently uploaded our new website and deleted the existing campaign, but the crawler seems to be caching our old website's data instead of the new site's. Every time we try to create a new campaign with the same details, it just pulls everything from the cache. Thanks
-
Hi ForzaHost!
This is Megan from the SEOmoz Help Team. I see that you've opened a ticket that my colleague, Chiaryn, is working with you on. We'll be following up with you in the ticket instead so we can look at your campaign specific information and help figure out what's happening.
Cheers!
-
Hello,
Sorry, I meant SEOmoz's crawler.
I should have been more clear, I apologize.
Thanks.
-
ForzaHost,
One thing you might try is Fetch as Google. In Google Webmaster Tools, the dashboard shows a Health section. Under Health, click Fetch as Google, and the drop-down lets you choose what you want to fetch (probably Web). You can submit up to 500 URLs (one at a time) and check the results.
Below Fetch as Google is Index, where you can see when the site was indexed, and so on.
Hope this is helpful,
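If you want to double-check what a crawler is actually served (for example, whether it still receives cached/old content), here is a minimal sketch using only the Python standard library that fetches a page while presenting a crawler-style User-Agent. The `rogerbot` token is Moz's crawler name; the exact header strings real bots send are fuller than this, and example.com is a placeholder.

```python
from urllib.request import Request, urlopen

def fetch_as(url: str, user_agent: str) -> bytes:
    """Fetch a URL while presenting the given User-Agent, so you can
    compare what a browser and a crawler are each served."""
    req = Request(url, headers={"User-Agent": user_agent})
    with urlopen(req, timeout=10) as resp:
        return resp.read()

# Example (requires network access, so not run here):
# browser_view = fetch_as("https://example.com/", "Mozilla/5.0")
# crawler_view = fetch_as("https://example.com/", "rogerbot")
# print(browser_view == crawler_view)
```

If the two fetches differ, the server is varying its response by User-Agent, which is one way stale or unexpected content ends up in a crawl.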
Related Questions
-
Unsolved: Ooops. Our crawlers are unable to access that URL
Hello,
I have entered my site faroush.com, but I got an error:
"Ooops. Our crawlers are unable to access that URL - please check to make sure it is correct"
What is the problem?
-
Our crawler was not able to access the robots.txt file on your site.
Good morning, Yesterday Moz gave me an error saying it wasn't able to find our robots.txt file. This is a new occurrence; we've used Moz and its crawling ability many times before, and I'm not sure why the error is happening now. I validated that the redirects and our robots page are operational, and nothing in our robots.txt disallows Roger. Any advice or guidance would be much appreciated. https://www.agrisupply.com/robots.txt Thank you for your time. -Danny
-
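When Moz reports a robots.txt problem, it helps to separate "file unreachable" from "file disallows the bot". A minimal sketch with Python's standard-library parser (the rules below are illustrative, not the actual agrisupply.com file; Moz's crawler identifies itself as rogerbot):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt contents; paste in your live file to test your own site.
ROBOTS_TXT = """\
User-agent: *
Disallow: /checkout/

User-agent: rogerbot
Disallow:
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Moz's crawler sends the user-agent token "rogerbot".
print(parser.can_fetch("rogerbot", "https://www.agrisupply.com/"))               # True
print(parser.can_fetch("SomeOtherBot", "https://www.agrisupply.com/checkout/"))  # False
```

If this parses cleanly and allows rogerbot, the problem is more likely reachability (redirect chains, timeouts, or an error status on the robots.txt URL itself) than the rules.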
Why does the SEOmoz crawler not see my snapshot?
I have a web app that uses AngularJS, and the content is all dynamic (a single-page application). I have generated snapshots for the pages and wrote a rule to redirect (301) to the snapshot when escaped_fragment is found in the URL. E.g. http://plure.com/#!/imoveis/venda/rj/rio-de-janeiro Request: http://plure.com/?escaped_fragment=/imoveis/venda/rj/rio-de-janeiro is redirected to http://plure.com/snapshots/imoveis/venda/rj/rio-de-janeiro/ The snapshot is a headless page generated by PhantomJS. Even following the guideline (https://developers.google.com/webmasters/ajax-crawling/docs/specification), I still can't see my pages crawled, and in SEOmoz I can only see the first page crawled, with no dynamic content on it. Am I doing something wrong? Is SEOmoz supposed to fetch the snapshot based on the same rules as Googlebot, or does SEOmoz not fetch snapshots?
-
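For reference, the hash-bang scheme described above can be sketched as two pure functions. This is an illustration under the spec's naming, where the query parameter is `_escaped_fragment_` (with underscores); the /snapshots/ layout is taken from the example in the post:

```python
from urllib.parse import quote

SNAPSHOT_ROOT = "/snapshots"  # assumed layout, matching the post's example

def escaped_fragment_url(hashbang_url: str) -> str:
    """Rewrite a #! URL into the form a spec-following crawler requests."""
    base, sep, fragment = hashbang_url.partition("#!")
    if not sep:
        return hashbang_url  # no hash-bang: nothing to rewrite
    joiner = "&" if "?" in base else "?"
    return base + joiner + "_escaped_fragment_=" + quote(fragment, safe="/=&")

def snapshot_redirect(base: str, fragment_value: str) -> str:
    """Target of the 301 redirect: the pre-rendered snapshot."""
    return base.rstrip("/") + SNAPSHOT_ROOT + fragment_value.rstrip("/") + "/"

print(escaped_fragment_url("http://plure.com/#!/imoveis/venda/rj/rio-de-janeiro"))
# -> http://plure.com/?_escaped_fragment_=/imoveis/venda/rj/rio-de-janeiro
print(snapshot_redirect("http://plure.com", "/imoveis/venda/rj/rio-de-janeiro"))
# -> http://plure.com/snapshots/imoveis/venda/rj/rio-de-janeiro/
```

Note the request URL quoted in the post uses `?escaped_fragment=` without underscores; if the rewrite rule only matches that spelling, a crawler sending the spec's `_escaped_fragment_` parameter would never hit the snapshot.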
What does the SEOmoz crawler take into account?
I'm working on a page that has links from some decent pages pointing to it, but a lot of them are low-value blog comments. So I'm pretty sure its Page Authority is higher than it should be, compared to where it's ranking. Does SEOmoz take the type of link into account? That is, does a footer link, blog comment, or forum-signature link carry less weight than a link in the content of the page itself, as it does with Google?
-
Drop in number of pages crawled by the Moz crawler
What would cause a sudden drop in the number of pages crawled/accessed by the Moz crawler? The site has about 600 pages of content. We have multiple campaigns set up in our Pro account to track different keyword campaigns, all for the same domain. Some show 600+ pages accessed, while others access only 7 pages for the same domain. What could be causing these issues?
-
SEOmoz crawler Unicode bug
For the last couple of weeks, the SEOmoz crawler has been crawling my homepage only and gets a 4xx error for most of the URLs. The crawler has no issues with the English URLs, only with the Unicode (Hebrew) ones. This is what I see in the CSV export for the crawl (one sample): http://www.funstuff.co.il/׳ž׳¡׳™׳‘׳×-׳¨׳•׳•׳§׳•׳× 404 text/html; charset=utf-8 You can see that the URL is gibberish. Please help.
-
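The pattern in that sample URL, a '׳' before every second character, is the classic signature of UTF-8 bytes decoded with a legacy single-byte Hebrew codepage: every Hebrew letter's UTF-8 lead byte is 0xD7, which is '׳' in cp1255. A sketch reproducing the tail of the sample (the diagnosis is an inference from the visible characters, not confirmed by Moz):

```python
from urllib.parse import quote, unquote

word = "רווקות"  # the Hebrew word at the end of the sample URL

# Each Hebrew letter is two UTF-8 bytes beginning with 0xD7. Decoding those
# bytes as cp1255 maps 0xD7 to '׳' and the trailing byte to a stray symbol,
# which is exactly what the crawl CSV shows.
mojibake = word.encode("utf-8").decode("cp1255")
print(mojibake)  # ׳¨׳•׳•׳§׳•׳×

# A correctly built URL carries the UTF-8 bytes percent-encoded instead:
print(quote(word))                    # %D7%A8%D7%95%D7%95%D7%A7%D7%95%D7%AA
print(unquote(quote(word)) == word)   # True
```

So the question for the crawler team would be whether their pipeline is percent-encoding the raw Hebrew paths as UTF-8 or mis-decoding them somewhere along the way.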
Crawler reporting incorrect URLs, resulting in false errors...
The SEOmoz crawler is showing 236 duplicate page titles. When I go in to see which page titles are duplicated, I see that the URLs in question are incorrect and read "/about/about/..." instead of just "/about/". The duplicates shown are the result of the crawler ending up on the "Page not found" page. Could this be the result of using relative links on the site? Is there anything I can do to remedy it? Thanks for your help! -Frank
-
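Relative links would explain exactly that pattern: an href written as "about/" resolves against the directory of the page it appears on, so from /about/ it points to /about/about/. A quick illustration with the standard library (example.com is a placeholder, not the poster's site):

```python
from urllib.parse import urljoin

# A path-relative href resolves against the current page's directory,
# so the same markup yields different targets on different pages.
print(urljoin("https://example.com/", "about/"))        # https://example.com/about/
print(urljoin("https://example.com/about/", "about/"))  # https://example.com/about/about/

# A root-relative href ("/about/") resolves the same from any page:
print(urljoin("https://example.com/about/", "/about/")) # https://example.com/about/
```

Switching the site's navigation to root-relative (or absolute) URLs would stop the crawler from compounding the path on each hop.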
Why is the crawler following form action links?
I have an issue with one of my sites where the SEOmoz crawler is following some form action links. It is my understanding that the crawler should ignore these links, so why is it not ignoring them in certain cases? If you need more detail, please ask. Thanks.