I want to recrawl my site manually
-
How can I push up the recrawl date?
-
You can't force a recrawl from within your campaign; however, if you want to run a one-off crawl, check out http://pro.moz.com/tools/crawl-test. It won't update your campaign data, but it is a great way to get fresh information about your site.
Related Questions
-
Syntax for canonical tag for a default page in a sub-directory (not subdomain) of a web site?
I'm getting two "no canonical tag" errors for a sub-directory's default page (both the www and root versions) - again, NOT a subdomain. Since the page is not the root of its own site, I tagged it as -- I have tried without the default.asp, but the error remains. Been doing this for 24 years and don't remember running across this before.
Moz Pro | dcmike
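For reference, the usual pattern is a single absolute canonical URL placed on both versions of the page. The original tag was stripped from the post, so the URLs below are illustrative only, not the poster's actual site:

```html
<!-- Placed in the <head> of BOTH the root and www versions of the
     sub-directory default page, e.g.
     http://example.com/subdir/default.asp and
     http://www.example.com/subdir/default.asp, so both resolve to
     one chosen canonical version. URLs are hypothetical. -->
<link rel="canonical" href="http://www.example.com/subdir/" />
```

If the crawler flags both the www and root versions, it is usually because each is reachable and neither declares (or both mis-declare) the canonical above.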
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting - I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set each to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what the options are for getting good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages - if I put www.epa.gov into a campaign, what happens given the site has so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? By contrast, www.epa.gov/air has only 1,450 pages, so if I track that sub-folder in a campaign, the crawl will cover it completely and I'll get a complete picture of this air-focused sub-folder. But (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I'd leave much of the 50,000-page crawl limit unused. (However, maybe that's not quite true - I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air quality sites - and maybe those competitors count toward the 50,000-page crawl limit and would get me up to it? How do the competitors you choose figure into the crawl limit?)

Any opinions on what I should do in this kind of situation? The small sub-folder vs. the full humongous site vs. some other way to go that I'm not thinking of?
Moz Pro | scienceisrad
How to track data from old site and new site with the same URL?
We are launching a new site within the next 48 hours. We have already purchased the 30-day trial, and we will continue to use this tool once the new site is launched. Just looking for some tips and/or best practices so we can compare the old data vs. the new data moving forward. Thank you in advance for your response(s). PB3
Moz Pro | Issuer_Direct
404 errors in crawl report - all pages are listed with index.html on a WordPress site
Hi Mozers, I have recently submitted a website using Moz, and the report has pulled up a second version of every page on the WordPress site as a 404 error, with index.html at the end of the URL. e.g. Live page URL - http://www.autostemtechnology.com/applications/civil-blasting/ Report page URL - http://www.autostemtechnology.com/applications/civil-blasting/index.html The permalink structure is set as /%postname%/ For some reason the report has listed every page with index.html at the end of the page URL. I have tried a number of redirects in the .htaccess file, but they don't seem to work. Any suggestions will be strongly appreciated. Thanks
Moz Pro | AmanziDigital
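For anyone hitting the same report, a commonly used approach (assuming Apache with mod_rewrite enabled - test on a staging copy before deploying) is to 301-redirect any request for .../index.html back to the directory URL:

```apache
RewriteEngine On
# Match only the original client request line, so the internal
# DirectoryIndex subrequest for index.html does not cause a redirect loop.
RewriteCond %{THE_REQUEST} ^GET\ /([^?\ ]*)index\.html? [NC]
RewriteRule ^ /%1 [R=301,L]
```

Note this fixes the duplicate-URL symptom; it is also worth finding what is generating the index.html links (a sitemap, theme, or plugin) in the first place.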
Open Site Explorer backlink results different than MajesticSEO's... why?
When checking the number of external links linking to a site, Open Site Explorer and MajesticSEO give completely different results. In one case Open Site Explorer is saying the website has: "49 Total Links" and MajesticSEO is saying the website has: "6,143 External Backlinks (in the last 5 years)" and "497 External Backlinks (in the last 90 days)" Does anyone know why this is?
Moz Pro | conor1005
Adding Disavowed Domains to Open Site Explorer?
Hi, Is there a way to add a list of disavowed domains to OSE? Also, how often is it refreshed? I know that GWMT still shows links from sites that have been down for months now. Thanks
Moz Pro | BeytzNet
I want to create a report of only the duplicate content pages as a CSV file, so I can create a script to canonicalize them
I want to create a report of only the duplicate content pages as a CSV file, so I can create a script to canonicalize them. I'd like to get something like: http://example.com/page1, http://example.com/page2, http://example.com/page3, http://example.com/page4. At the moment I have to open each page in "Issue: Duplicate Page Content", and this takes a lot of time. The same goes for duplicate page titles.
Moz Pro | nvs.nim
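Once a CSV export of a duplicate-content group is in hand, the canonicalization script itself is small. A minimal Python sketch, assuming a CSV with one duplicate URL per row under a `URL` column (the column name and the "first URL wins" rule are assumptions - adjust to the actual export and to whichever page you want as the canonical target):

```python
import csv
import io

def canonical_tags(csv_text, url_column="URL"):
    """Map each duplicate URL to the canonical <link> tag it should carry.

    Assumes one duplicate URL per row under `url_column`; the first URL
    listed is treated as the canonical target for all the others.
    """
    urls = [row[url_column] for row in csv.DictReader(io.StringIO(csv_text))]
    if not urls:
        return {}
    tag = '<link rel="canonical" href="%s" />' % urls[0]
    # Every remaining URL in the group points at the chosen canonical.
    return {u: tag for u in urls[1:]}

# Illustrative input mirroring the example URLs in the question.
sample = ("URL\n"
          "http://example.com/page1\n"
          "http://example.com/page2\n"
          "http://example.com/page3\n")
for page, tag in canonical_tags(sample).items():
    print(page, "->", tag)
```

The same grouping logic works for duplicate page titles: swap the input CSV and keep the mapping step unchanged.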
Bulk Open Site Explorer (OSE) Tool?
I am trying to do some spring cleaning for a client and hoping to prune any unnecessary domains. Is there a tool that will check these domains through Open Site Explorer in bulk? I've looked through all the different Excel spreadsheet apps and Google Docs apps, but they are incredibly buggy, if they work at all, since SEOmoz changed their data limits. Maybe a new tool has been released in the last few months that I am not aware of. Thanks!
Moz Pro | kerplow
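No official bulk tool is named in the thread, so here is only a harness sketch in Python: it batches a domain list through a placeholder lookup and writes one row per domain to a CSV for pruning decisions. The `fetch_metrics` function and its metric names are hypothetical stand-ins - replace the body with whichever real API call or manual OSE export you have access to:

```python
import csv

def fetch_metrics(domain):
    """Hypothetical per-domain lookup (stand-in only). Swap the body for a
    real link-index API call or a value pasted from an OSE export."""
    return {"domain": domain, "linking_root_domains": ""}

def bulk_check(domains, out_path="ose_metrics.csv"):
    """Run every domain through the lookup and collect one CSV row each,
    so the whole portfolio can be reviewed in a single spreadsheet."""
    rows = [fetch_metrics(d) for d in domains]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["domain", "linking_root_domains"])
        writer.writeheader()
        writer.writerows(rows)
    return rows

bulk_check(["example.com", "example.org"])
```

Even with a rate-limited data source, looping like this and sleeping between calls is usually less error-prone than the spreadsheet plugins mentioned above.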