Initial Crawl Questions
-
Hello.
I just joined and used the Crawl tool. I have many questions and hoping the community can offer some guidance.
1. I received an Excel file with 3k+ records. Is there a friendly online viewer for the Crawl report? Or is the Excel file the only output?
2. Assuming the Excel file is the only output, the Time Crawled is a number (e.g. 1305798581). I have tried changing the field to a date/time format but that did not work. How can I view the field as a normal date/time such as May 15, 2011 14:02?
3. I use the ™ symbol in my Title. This symbol appears in the output as a few ASCII characters. Is that a concern? Should I remove the trademark symbol from my Title?
4. I am using XenForo forum software. All forum threads automatically receive a Title Tag and Meta Description as part of a template. The Crawl Test report shows my Title Tag and Meta Description as blank for many threads. I have looked at the source code of several pages and they all have clean Title tags and I don't understand why the Crawl Report doesn't show them. Any ideas?
5. In some cases the HTTP Status Code field shows a result of "3". What does that mean?
6. For every URL in the Crawl Report there is an entry in the Referrer field. What exactly is the relationship between these fields? I thought the Crawl Tool would inspect every page on the site. If a page doesn't have a referring page is it missed? What if a page has multiple referring pages? How is that information displayed?
7. Under Google Webmaster Tools > Site Configurations > Settings > Parameter Handling I have the options set to either "Ignore" or "Let Google Decide" for various URL parameters. These are "pages" of my site which should mostly be ignored. For example, a forum may have 7 headers, each one of which can be sorted in ascending or descending order. The only page that matters is the initial page. All the rest should be ignored by Google and the Crawl.
Presently there are 11 records for many pages which really should only have one record due to these various sort parameters. Can I configure the crawl so it ignores parameter pages?
I am anxious to get started on my site. I dove into the crawl results and it's just too messy in its present state for me to pull out any actionable data. Any guidance would be appreciated.
-
Good question. There are a few ways of doing it, but I'd advise using a canonical URL on each page to tell the search engines where the content stems from. I had a quick look at XenForo and this looks relatively simple to do... although make sure you test things thoroughly, just in case.
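For reference, a canonical tag sits in the page's `<head>` and looks something like this (the URL is purely illustrative, not taken from the question):

```html
<head>
  <!-- Tells search engines which URL is the preferred version of this content,
       so sort/parameter variations all point back to one page -->
  <link rel="canonical" href="https://example.com/threads/my-thread/" />
</head>
```

Every duplicate variation of the page (sorted views, parameter URLs) should carry the same canonical URL pointing at the one version you want indexed.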
-
Thank you very much for the detailed reply.
For #1, I did start my campaign and I will follow up.
2. That worked perfectly!
3. Thank you for the information.
4. I figured out the problem. It appears the crawler treats even the slightest difference in a URL as a separate page. Many pages are shown ending with a slash ("/"), but those same pages are often linked to without the trailing slash. The slash-less versions do not show their Titles or Meta tags in the crawler report. I presume this is just a crawler issue and would not affect SEO performance.
5. I checked the cell formatting and it is "General" which should be fine. All of the rest of the HTTP Status codes appear normally. What I did notice is that all of the "3" codes refer to attachments. Most attachments show a "3" code, but a few show as 301s.
6. Good to know, thanks for sharing.
7. My main follow-up question would be: is there any harm in using robots.txt to tell crawlers to disregard all parameter URLs? Basically I want to clean things up; all of those URLs which are style or sorting variations aren't helpful to any crawler, and those pages shouldn't be indexed.
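For what it's worth, a robots.txt sketch for that kind of cleanup might look like the following. The parameter names here are hypothetical (XenForo's actual sort parameters may differ), and note that the `*` wildcard is honoured by the major crawlers but is not part of the original robots.txt standard:

```
User-agent: *
# Block sort/direction variations of forum pages (parameter names are examples only)
Disallow: /*?order=
Disallow: /*?direction=
```

Test any rules like these before deploying them; an over-broad pattern can block pages you do want indexed.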
-
I can help with a few of those:
1. Looks like you're using the crawl tool. If this is for an on-going project, go to http://www.seomoz.org/campaigns and set one up. That way you get a sexy GUI (if you like robots that is) and weekly crawls / rank tracking.
2. That number is almost certainly a UNIX timestamp. To convert it inside Excel use the formula below (don't forget to format the cell as a date, otherwise you'll just see another number!):
=(A1/86400)+25569+(-5/24)
Here 86400 is the number of seconds in a day, 25569 is Excel's date serial for 1 January 1970, and -5/24 shifts the result to US Eastern time (UTC-5) - adjust that offset for your own timezone.
3. I wouldn't worry about that at all - the crawler converts any non-standard characters to ASCII but, as far as I know, it won't affect your SERP performance.
4. Could you give a few examples of the pages that are affected so I can take a look?
5. That's either a bug or (not too likely, but worth checking) an issue with how the numbers are formatted in your spreadsheet. I'd advise opening the file in a text editor to check that the numbers Excel shows match the raw values and, if they do, submitting a bug report to the SEOMoz team.
6. The Referrer cell tells you how the crawler got to that page. If you don't have any internal links to a page on your site then, chances are, the crawler won't find it. The one caveat (and I'm not 100% sure, so this would need confirmation) is if the crawl tool uses external linking data. I'd always assumed it didn't, but SEOMoz will know where some of your pages are even if you don't link to them internally, since external sites point to them. If that's the case, it could explain a blank Referrer cell.
7. Remember that this is SEOMoz crawling your site, not Google. Anything you set in Webmaster Tools isn't visible to other search engine spiders such as those used by Bing, Yahoo!, SEOMoz, Majestic, etc., so they won't know how to handle your URL parameters. You're better off setting this through a meta robots tag, robots.txt, or .htaccess (depending on what you're trying to do). Be careful though - if you mess it up there's a strong possibility that you'll end up blocking pages that you want the search engines to be able to access!
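As a sanity check on the formula in #2, here is the same conversion in Python, using the timestamp from the original question. The -5/24 in the Excel formula is a US Eastern offset; this version stays in UTC:

```python
from datetime import datetime, timezone

ts = 1305798581  # the "Time Crawled" value from the question

# Interpret the UNIX timestamp (seconds since 1970-01-01) as a UTC date/time
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.strftime("%B %d, %Y %H:%M"))  # → May 19, 2011 09:49 (UTC)
```

Subtract 5 hours (or use a proper `zoneinfo` timezone) if you want the Eastern-time value the Excel formula produces.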
Hope that's all helpful... give me a shout if there's anything else.
- Matt