1 week has passed: Crawled pages still N/A
-
Roughly one week ago I went Pro and created a campaign for the smallish webshop I'm employed at; however, it doesn't seem to crawl. I've checked our visitor logs, and while we find other bots such as Google, Bing, Yandex and so forth, the SEOmoz bot hasn't been visible. Perhaps I'm just looking for the wrong user agent (see the log-scan sketch at the end of this post). Oh well, onwards.
While I thought it might take time, as a small test I added a domain that I've owned for some time but don't really use; that target site is only 17 pages, and it was crawled almost within the hour. I realise that the ~5,000 pages in the main campaign would take some time, but wouldn't the initial 250 pages be crawled by now?
I should add that I didn't include http:// in the original campaign, but I did in the one that got crawled. I can't seem to change this myself in order to check whether or not that's the problem.
Does anyone have any ideas? Should I just wait, or is there something I can actively do to force it to start rolling?
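For reference, on the log-checking point: Moz's crawlers are generally reported to identify themselves as "rogerbot" (the campaign crawler) and "dotbot" (the link-index crawler) in the user-agent string, so those are the substrings worth searching for rather than anything SEOmoz-branded. A minimal Python sketch, assuming a standard access log at a placeholder path:

from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"  # placeholder: point at your server's log
BOTS = ("rogerbot", "dotbot")  # Moz crawler names; an assumption, not from this thread

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        lowered = line.lower()
        for bot in BOTS:
            if bot in lowered:
                hits[bot] += 1

print(hits if hits else "no Moz crawler requests found in this log")

If that finds nothing for rogerbot, the crawler genuinely hasn't visited yet, which points back at the campaign setup rather than the log-reading.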
-
OK, thanks for the suggestion. I'll go ahead and do that.
-
There used to be a separate SEOmoz help forum, but it looks like that's not available anymore (at least at the moment). Go to the SEOmoz help page, click the red "Contact Our Help Team" button, and ask whether they can look into it or whether it's just a time issue.
Related Questions
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content errors. Most of them come from dynamically generated URLs that have some specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages, and to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are some other pages with no parameters (or different parameters) that I need to take care of, so I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few topics related to this, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
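For what it's worth: the original robots.txt specification separates records with blank lines, so keeping the empty line between the two groups is the safe, conventional choice (many modern parsers also start a new group at each User-agent line regardless). As for the wildcard, not every robot supports '*' in paths, but Moz's bots are generally reported to honor Googlebot-style wildcards. A minimal Python sketch of that matching logic, using illustrative paths that are not from the original question:

import re

def wildcard_rule_matches(rule: str, path_and_query: str) -> bool:
    # Escape regex metacharacters, then turn the escaped '*' back into '.*'.
    # (A trailing '$' anchor in a rule is not handled in this sketch.)
    pattern = re.escape(rule).replace(r"\*", ".*")
    return re.match(pattern, path_and_query) is not None

rule = "/*numberOfStars=0"
tests = [
    "/hotels?numberOfStars=0",           # expect: blocked
    "/hotels?numberOfStars=0&sort=asc",  # expect: blocked
    "/hotels?numberOfStars=3",           # expect: allowed
    "/about-us",                         # expect: allowed
]
for t in tests:
    print(t, "->", "blocked" if wildcard_rule_matches(rule, t) else "allowed")

Under that matching, the rules above would block only URLs containing numberOfStars=0 and leave everything else crawlable.
-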
Why is my comp still beating me??
Hey friends, I have been doing some analyzing in Open Site Explorer, and attached below is an image showing my URL (5th position) and my competitor's (1st position). Any ideas as to WHY they are still beating me in rankings? I'm sure there could be a number of reasons, but the numbers are staggering. Thanks!
[Attachment: Screen Shot 2013-09-16 at 5.22.46 PM.png]
-
Does a URL with no trailing slash (/) need a special redirect to the same URL with a trailing slash (/)?
I recently moved a website to WordPress, which by default includes the trailing slash (/) after ALL URLs. My URL structure used to look like: www.example.com/blue-widgets Now it looks like: www.example.com/blue-widgets/ Today I checked the URLs using Open Site Explorer, and below is what I discovered: www.example.com/blue-widgets returned all my links, authority, etc. HOWEVER, there is a note that says "Oh hey! It looks like that URL redirects to www.example.com/blue-widgets/. Would you like to see data for that URL instead?" When I click the link to THAT URL, I get a note that says NO DATA AVAILABLE FOR THIS URL. Does this mean that www.example.com/blue-widgets/ really has NO DATA? How do I fix this?
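One thing worth verifying before worrying about the tool data: that the slashless URL returns a clean 301 (not a 302 or a redirect chain) to the trailing-slash version, since that is what lets link metrics consolidate over time. A minimal standard-library sketch, using the placeholder URL from the question:

import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # returning None makes urllib raise instead of following

opener = urllib.request.build_opener(NoRedirect)
try:
    resp = opener.open("http://www.example.com/blue-widgets")  # placeholder URL
    print("no redirect; status", resp.status)
except urllib.error.HTTPError as err:
    # A 301 with Location .../blue-widgets/ is the clean outcome here.
    print(err.code, "->", err.headers.get("Location"))

As for Open Site Explorer showing no data for the new URL, that is typically the index not having recrawled the new address yet rather than lost data, though only Moz support could confirm for a specific URL.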
-
Settings to crawl entire site
Not sure what happened, but I started a third campaign yesterday and only 1 page was crawled. The other two campaigns have 472 and 10K pages respectively. What is the proper setting to choose at the beginning of campaign setup to have the entire site crawled? Not sure what I did differently; I must be reading the instructions incorrectly. Thanks, Don
-
Pass Page LinkJuice? Or Pass Keyword LinkJuice?
I have a popular page that is not one of the three pages I am hoping to raise awareness of (my focus pages). The dilemma I am trying to resolve is that I really don't want to send all the flow from the popular page to just ONE of my focus pages. Rather, I want the keyword-relevant portions of that page to help all three. So I consider the rel=canonical tag... er, no. rel=canonical would pass ALL of my popular page's link juice to ONE of my three hopeful pages. What's the best way to pass each of my three hopeful pages their, um, portion of the popular page's link juice, keyed to the keywords relevant to each? I'm white hat by preference. All four pages are good, legitimate landing pages, and of course I dread sabotaging the popularity of what is working. Suggestions? Advice?
-
Why are these pages considered duplicate page content?
A recent crawl diagnostic for a client's website had several new duplicate page content errors. The problem is, I'm not sure where the error comes from, since the content of each page is different from the others. Here are the pages that SEOmoz reported as having duplicate page content errors:
http://www.imaginet.com.ph/wireless-internet-service-providers-term
http://www.imaginet.com.ph/antivirus-term
http://www.imaginet.com.ph/berkeley-internet-name-domain
http://www.imaginet.com.ph/customer-premises-equipment-term
The only thing similar that I see is the headline, which says "Glossary Terms Used in this Site" - I hope that one sentence is the reason for the error. Any input is appreciated, as I want to find the best solution for my client's website errors. Thanks!
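Moz doesn't publish the exact algorithm behind its duplicate content flag, but flags like this usually come down to pages sharing a high proportion of template text (navigation, headers, boilerplate) relative to their unique copy, and short glossary pages are exactly where template text dominates. A rough, hypothetical sketch for gauging how much two pages' visible text overlaps (the page strings are placeholders to paste real copy into):

def shingles(text: str, k: int = 5) -> set:
    # Break text into overlapping k-word sequences ("shingles").
    words = text.lower().split()
    if len(words) < k:
        return {tuple(words)}
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a: str, b: str) -> float:
    # Jaccard overlap of the two shingle sets.
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

page_a = "Glossary Terms Used in this Site ... full visible text of the antivirus page ..."
page_b = "Glossary Terms Used in this Site ... full visible text of the BIND page ..."
print(f"shingle overlap: {similarity(page_a, page_b):.0%}")

A high overlap score would suggest the shared template, not the one headline alone, is what trips the duplicate detector.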
-
Excluding parameters from seomoz crawl?
I'm getting a ton of duplicate content errors because almost all of my pages feature a "print this page" link that adds the parameter "printable=Y" to the URL and displays a plain text version of the same page. Is there any way to exclude these pages from the crawl results?
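Assuming the crawler honors wildcard Disallow rules (see the robots.txt question above), a rule like Disallow: /*printable=Y under a rogerbot User-agent group is the usual approach; whether there is also a campaign setting for it would be a support question. Before committing, a quick Python sketch for previewing which URLs from a crawl export such a parameter rule would cover (the list is illustrative):

from urllib.parse import parse_qs, urlparse

crawl_export = [  # placeholder URLs, not from the original question
    "http://www.example.com/widgets",
    "http://www.example.com/widgets?printable=Y",
    "http://www.example.com/about?printable=Y&ref=nav",
]

excluded = [
    url for url in crawl_export
    if parse_qs(urlparse(url).query).get("printable") == ["Y"]
]
print(f"{len(excluded)} URLs would be excluded:")
for url in excluded:
    print(" ", url)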
-
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file:

Disallow: /catalog/product_compare/

Yet Roger is crawling these pages (1,357 errors). Is this a bug, or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled:

example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/

Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
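The rule itself looks right for plain prefix blocking, with one caveat: a Disallow line only takes effect inside a User-agent group, so it's worth confirming the file has one. A way to sanity-check the combination, using the standard library's robots.txt parser (which handles prefix rules like this; no wildcards are involved here) and assuming rogerbot is the user agent in play:

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: rogerbot",  # assumption: the group the Disallow sits under
    "Disallow: /catalog/product_compare/",
])

url = ("http://example.com/catalog/product_compare/add/product/19241/"
       "uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/")
print(rp.can_fetch("rogerbot", url))  # expected: False (correctly disallowed)

If that prints False, the syntax is fine, and the errors are more likely a cached robots.txt or a crawl that predates the rule; Moz support can confirm how often Roger refetches robots.txt.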