Duplicate Page Titles and Content
-
The SeoMoz crawler has found many pages like this on my site with /?Letter=Letter, e.g. http://www.johnsearles.com/metal-art-tiles/?D=A. I believe it is finding multiple caches of a page and identifying them as duplicates. Is there any way to screen out these multiple cache results?
-
I think I figured out what to add to robots.txt to screen out any URL with a '?' in it. I believe these ?-parameter URLs are session IDs. I'll see what rogerbot does next time it crawls my site.
Disallow: /*?
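For reference, a complete robots.txt block using that wildcard rule might look like the following. (Note the `*` wildcard is an extension supported by major crawlers such as Googlebot and rogerbot, not part of the original robots.txt convention, and this rule blocks every URL with a query string, including any parameterized pages you might actually want crawled.)

```
# Block crawling of any URL containing a query string
User-agent: *
Disallow: /*?
```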
-
Hey John,
My apologies for any issues you are experiencing with our service. I would definitely like to address any other issues you may be running into besides this one. You can either respond to this Q&A thread or submit a private ticket to our help team: if you go to our help hub (www.seomoz.org/help) you can easily submit a ticket by clicking the "Contact Help Team" button.
As for your duplicate content question: any time the same content is found on more than one URL, it is considered duplicate content. WordPress is a good example where duplicate content is often found but can be easily addressed.
In WordPress you could have your homepage at www.domain.com and an author page at www.domain.com/author/authorname. If your blog has only one author, though, that author page will be identical to your homepage, and the result is duplicate content on your site. There are a few ways to resolve this, the most popular being simply to prevent access to the author page and redirect it back to the homepage. That way, other sites can't link to the duplicate page; they link directly to the homepage instead.
Another option is to use a meta robots noindex,follow tag on the duplicate page, in this case the author page. This prevents the page from being indexed while still allowing the links on it to be found and crawled. You can also block these pages in your robots.txt file; rules can be aimed at our crawler specifically by using the user-agent rogerbot.
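As a sketch, the noindex,follow directive described above would sit in the duplicate page's head, and a robots.txt rule scoped to Moz's crawler would use the rogerbot user-agent (the /author/ path here follows the WordPress example above; substitute your own duplicate path):

```html
<!-- In the <head> of the duplicate (author) page: keep it out of the
     index, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```

```
# robots.txt: block only Moz's crawler from the author pages
User-agent: rogerbot
Disallow: /author/
```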
I hope that makes sense.
Let me know if you have any additional questions or concerns.
Kenny
-
Thanks Guy. I was thinking of subscribing to SeoMoz but the site reports have been less than useful. This is just one of 5 issues I've found.
-
So far no. Until they fix that little error you can use Google Webmaster Tools to double-check for real duplicate content.
The spider sees whatever.php?var=1 as a different page because some sites use query parameters to identify pages (index.php?p=103 is one page, p=102 another), while other sites use URL parameters that don't change the page's content at all.
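To see which parameterized URLs actually collapse to the same page, one quick check (a hypothetical Python sketch using only the standard library, not a Moz tool) is to keep only the query parameters known to select content and group URLs by the result; everything else (sort order, session IDs) is dropped:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit
from collections import defaultdict

# Parameters that actually change the page's content (e.g. ?p=103 selects
# a post). This whitelist is an assumption you'd tune per site.
CONTENT_PARAMS = {"p"}

def canonicalize(url):
    """Rebuild the URL keeping only content-bearing query parameters."""
    scheme, netloc, path, query, _ = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query) if k in CONTENT_PARAMS]
    return urlunsplit((scheme, netloc, path, urlencode(kept), ""))

def group_duplicates(urls):
    """Map each canonical URL to the crawled variants collapsing into it."""
    groups = defaultdict(list)
    for url in urls:
        groups[canonicalize(url)].append(url)
    return {canon: variants for canon, variants in groups.items()
            if len(variants) > 1}

crawled = [
    "http://example.com/index.php?p=103",       # a real page
    "http://example.com/index.php?p=102",       # a different real page
    "http://example.com/metal-art-tiles/?D=A",  # sort parameter only
    "http://example.com/metal-art-tiles/",      # same page, no parameter
]

for canon, variants in group_duplicates(crawled).items():
    print(canon, "<-", variants)
```

Here the two ?p= URLs survive as distinct pages, while ?D=A collapses into the plain URL, flagging it as a likely crawler-only "duplicate".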
Related Questions
-
API for On Page tool
I'm looking for a tool similar to On Page Grader (Moz) or Focus Keyword (Yoast) with an API. We are building out our internal CRM system. Even though none of these tools can replace manual on-page analysis, it will be used as a metric and to catch human mistakes.
Moz Pro | OscarSE
How do I find out which pages are being indexed on my site and which are not?
Hi, I'm doing my first technical audit on my site. I am learning how to do an audit as I go and am a bit lost. I know some pages won't be indexed, but how do I:
1. Check the site for all pages, both indexed and not indexed
2. Run a report to show indexed pages only (I presume I can do this via Screaming Frog or Webmaster Tools)
3. Compare the two lists and work out which pages are not being indexed — I'll then need to figure out why, but I'll cross that bridge once I get to it
Thanks Ben
Moz Pro | benjmoz
Duplicate titles reported with canonical
Hi Mozzers, the reports say I have some duplicate content and titles even though there is a canonical tag on those pages. Is anyone else seeing this?
Moz Pro | KarlBantleman
Why does Crawl Diagnostics report this as duplicate content?
Hi guys, we've been addressing a duplicate content problem on our site over the past few weeks. Lately we've implemented rel canonical tags in various parts of our ecommerce store and have been observing the effects in both SEOmoz and Webmaster Tools. Although our duplicate content errors are definitely decreasing, I can't help but wonder why some URLs are still being flagged as duplicate content by the SEOmoz crawler. Here's an example, taken directly from our Crawl Diagnostics report. URL with 4 duplicate content errors: /safety-lights.html. Duplicate content URLs: /safety-lights.html?cat=78&price=-100, /safety-lights.html?cat=78&dir=desc&order=position, /safety-lights.html?cat=78, /safety-lights.html?manufacturer=514. What I don't understand is that all of the URLs with URL parameters have a rel canonical tag pointing to the 'real' URL /safety-lights.html. So why is the SEOmoz crawler still flagging this as duplicate content?
Moz Pro | yacpro13
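For clarity, the canonical tag described in this question would sit on each parameterized variant and look something like the following sketch (the domain here is a placeholder, since the question doesn't give one):

```html
<!-- On /safety-lights.html?cat=78 and the other parameterized variants -->
<link rel="canonical" href="http://www.example.com/safety-lights.html">
```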
How do I scan down to 10000 pages?
Hi, very new here. I have set up 5 campaigns, all of fairly large sites. It appears SEOmoz has crawled 4 of them down to 250 pages and 1 down to 10,000. The one I really want to see crawled to 10,000, my own site, is the one I started crawling first, well over a week ago. How do I get SEOmoz to crawl further? Thanks
Moz Pro | First-VehicleLeasing
On Page Analysis and Grading
I received an email that the on-page analysis for my campaigns was completed, but when I click on the link there are no grades there. What does that mean? Another question on this topic: when your campaign is graded, are pages graded on all the keywords in the campaign, or is each keyword graded individually? Thanks!
Moz Pro | Confections
Duplicate page error from SEOmoz
SEOmoz's Crawl Diagnostics is complaining about a duplicate page error. I'm trying to use a rel=canonical but maybe I'm not doing it right. This page is the original, definitive version of the content: https://www.borntosell.com/covered-call-newsletter/sent-2011-10-01. This page is an alias that points to it (each month the alias is changed to point to the then-current issue): https://www.borntosell.com/covered-call-newsletter/latest-issue. The alias page contains a rel=canonical tag (also updated each month when a new issue comes out) in the head section, pointing at the definitive URL. Is that not correct? Is the https (vs. http) messing something up? Thanks!
Moz Pro | scanlin
Status 404-pages
Hi all, one of my websites was crawled by SEOmoz this week. The crawl showed 3 errors: 1 missing title and 2 client errors (4XX). One of these client errors is the 404 page itself! What's your suggestion about this error? Should a 404 page return the 404 HTTP status? I'd like to hear your opinion on this one! Thanks all!
Moz Pro | Partouter