Crawl diagnostics incorrectly reporting duplicate page titles
-
Hi guys,
I have a question regarding the duplicate page titles being reported in my Crawl Diagnostics. It appears that the URL parameter "?ctm" is causing the crawler to think duplicate pages exist. In GWT, we've specified to use the representative URL when that parameter is present. It appears to be working, since when I search site:http://www.causes.com/about?ctm=home, I am served a single search result for www.causes.com/about. That raises the question: why is the SEOmoz crawler reporting duplicate page titles when Google isn't (the page doesn't appear under HTML Improvements for duplicate page titles)? A canonical URL is not used for this page, so I'm assuming that may be one reason. The only other thing I can think of is that Google's crawler is simply "smarter" than the Moz crawler (no offense; you guys put out an awesome product!).
Any help is greatly appreciated and I'm looking forward to being an active participant in the Q&A community!
Cheers,
Brad
-
Glad I could help, Bradley. Let us know if you need help with anything else.
-
Thanks for the thorough response, Chiaryn! I figured as much but wanted to make sure I wasn't overlooking anything.
-
Hey Bradley,
You're right; Google's crawler is far more sophisticated than ours because they have many more resources, both engineering and financial, to pour into it. We think our crawl provides tremendous value and is an excellent way to discover and understand your site's architecture at scale, but it isn't strange that it doesn't line up exactly with what a site: search reveals. We also don't always know how Google (or any other search engine bot) will treat a set of pages, so we would rather be safe than sorry with the data we provide.
Since the page http://www.causes.com/about?ctm=home is linked to from another page on your site (www.causes.com) and resolves with a 200 status, our crawler sees it as an individual page and won't associate it with the main /about page. Instead, it just compares the code and content with the other pages we've crawled and reports back when we find duplicates.
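As an aside, since you mentioned no canonical URL is set: the standard way to consolidate parameterized URLs is a rel=canonical tag in the head of the page. A minimal sketch (assuming your /about template can be edited; I can't promise every crawler treats it identically to Google, but it's the common hint):

```html
<!-- Served on /about and /about?ctm=home alike;
     tells crawlers which URL is the preferred version of this page -->
<link rel="canonical" href="http://www.causes.com/about" />
```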
I hope this helps clear things up. Please let me know if you have any other questions.
Chiaryn
Help Team Ninja
-
Related Questions
-
How do I fix duplicate title issues?
I have a subdomain that isn't even on our own site, but it's resulting in a lot of errors in Moz for duplicate content, as shown here: http://cl.ly/1R081v0K0e2N. Would this affect our ranking, or are these simply errors within Moz? What measures could I take to make sure that Moz or Google doesn't associate our site with these errors? Would I have to add a noindex in the .htaccess file for the subdomain?
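For reference, the kind of rule I have in mind would be an X-Robots-Tag header in the subdomain's own .htaccess rather than ours (a hypothetical sketch, assuming the subdomain runs Apache with mod_headers enabled):

```apache
# Hypothetical .htaccess at the subdomain's document root:
# asks search engines not to index anything served from this host.
<IfModule mod_headers.c>
    Header set X-Robots-Tag "noindex, nofollow"
</IfModule>
```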
Moz Pro | MMAffiliate0
-
Duplicate Title
Hi, I am getting a "duplicate title" error for all the sites I make and I am not sure why; it's only for my homepage. www.carolynnescottages.com.au is one example. It picks up the URL www.carolynnescottages.com.au and also www.carolynnescottages.com.au/index. The index page is the homepage. Any help would be greatly appreciated. Also, are there any tutorials (or videos) showing how to use each of the SEOmoz tools properly? Thanks again. Tammy
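For context, one fix I've seen suggested for the /index duplicate is a 301 redirect to the root, something like this in .htaccess (a hypothetical sketch, assuming the site runs Apache with mod_rewrite):

```apache
# Hypothetical rule: 301-redirect /index (and /index.html) to the root
# so only one URL ever serves the homepage.
RewriteEngine On
RewriteRule ^index(\.html)?$ / [R=301,L]
```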
Moz Pro | tammyc0
-
Is it possible to exclude pages from Crawl Diagnostic?
I like the crawl diagnostic but it shows many errors due to a forum that I have. I don't care about the SEO value of this forum and would like to exclude any pages in the /forum/ directory. Is it possible to add exclusions to the crawl diagnostic tool?
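For reference, since I don't care about the forum's SEO value anyway, blocking every crawler from it via robots.txt would be acceptable to me. My understanding is a rule like this would keep Moz's crawler out (a sketch; "rogerbot" as the user-agent name is my assumption):

```
User-agent: rogerbot
Disallow: /forum/
```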
Moz Pro | wfernley2
-
Crawl Diagnostics Report Lacks Information
When I look at Crawl Diagnostics, SEOmoz tells me there are 404 errors. This is understandable, because some pages were removed. What this report doesn't tell me is how those pages were discovered. This is a very important piece of information, because it would tell me there are links pointing to those pages, either internal or external. I believe the internal links have been removed. If the report told me how it found each link, I would be able to take immediate action. Without that information, I have to do a lot of investigation, and when you have a million pages, that isn't easy. Some possibilities: The crawler remembered the page from a previous crawl. There was a link from an index page, i.e. it is still in the database. There was an individual link from another story, so now there are broken links. Ditto, but on a static index page. The link was from an external source, so I need to make a redirect. Am I missing something, or is this a feature the SEOmoz crawler doesn't have yet? What can I do (other than checking all my pages) to discover this?
Moz Pro | loopyal0
-
Too Many On-Page Links: Crawl Diag vs On-Page
I've got a site I'm optimizing that has thousands of 'too many links on-page' warnings from the SEOmoz crawl diagnostics. I've been in there and realized that there are indeed too many (the rent is too damned high), and it's due to a header/left/footer category menu that's repeating itself. So I changed these links to NoFollow, cutting my total links by about 50 per page. I was too impatient to wait for a new crawl, so I used the On-Page Reports to see if anything would come up on the Internal Link Count/External Link Count factors, and nothing did. However, the crawl (eventually) came back with the same warning. I looked at the link count in the crawl details, and realized that it's basically counting every single '<a href' on the page. Because of this, I guess my questions are twofold: 1. Is nofollow a valid strategy to reduce link count for a page? (Obviously not for the SEOmoz crawler, but for Google.) 2. What metric does the On-Page Report use to determine if there are too many internal/external links? Apologies if this has been asked; the search didn't seem to come up with anything specific to this.
Moz Pro | icecarats0
-
0 values reported in anchor text distribution report
Can anyone explain the rows of 0 values for the majority of anchor texts reported in the Anchor Text Distribution section of OSE? E.g. I ran a report for a website which has around 2,500 inbound links, and when I looked at the CSV file, there are about 1,500 anchor texts showing as 0. I'm sure there's either a simple explanation for this or it's just a glitch. Can anyone help?
Moz Pro | Websensejim0
-
Why is Roger crawling pages that are disallowed in my robots.txt file?
I have specified the following in my robots.txt file: Disallow: /catalog/product_compare/ Yet Roger is crawling these pages = 1,357 errors. Is this a bug or am I missing something in my robots.txt file? Here's one of the URLs that Roger pulled: example.com/catalog/product_compare/add/product/19241/uenc/aHR0cDovL2ZyZXNocHJvZHVjZWNsb3RoZXMuY29tL3RvcHMvYWxsLXRvcHM_cD02/ Please let me know if my problem is in robots.txt or if Roger spaced this one. Thanks!
Moz Pro | MeltButterySpread
-
Can you help me get started using the crawl diagnostics report?
After getting the crawl diagnostics report for the first time, my boss and I looked it over and tried to fix the problems, but we are stumped. I have watched videos, read books, etc., but have found nothing that helps. I need assistance getting started on improving my website. Can you help?
Moz Pro | WVInjuryLawyer0