Duplicate Page Titles and Content
-
The SeoMoz crawler has found many pages like this on my site with /?Letter=Letter, e.g. http://www.johnsearles.com/metal-art-tiles/?D=A. I believe it is finding multiple caches of a page and identifying them as duplicates. Is there any way to screen out these multiple cache results?
-
I think I figured out what to add to robots.txt to screen out any URL with a '?' in it. I believe these ?-URLs are session IDs. I'll see what Roger-bot does next time it crawls my site.
Disallow: /*?
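For the directive to be valid it needs to sit under a User-agent line; a minimal robots.txt using the rule above would be:

```text
# Block all compliant crawlers from any URL containing a query string
User-agent: *
Disallow: /*?
```

Note that the `*` wildcard inside a Disallow path is an extension honored by Google, Bing, and most major crawlers rather than part of the original robots.txt standard, so behavior can vary by bot.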
-
Hey John,
My apologies for any issues you are experiencing with our service. I would definitely like to address any other issues, besides this one, that you may be running into. You can either respond to this Q&A thread or submit a private customer support ticket to our help team. If you go to our help hub (www.seomoz.org/help), you can easily submit a ticket by clicking the "contact help team" button.
As for your duplicate content question, it is important to know that any time the same content is found on more than one URL, it is considered duplicate content. WordPress is a good example of a platform where duplicate content is often found but can be easily addressed.
In WordPress you could have your homepage at www.domain.com and an author page at www.domain.com/author/authorname. If your blog only has one author, though, the author page is going to be identical to your homepage, and the result is that your site has duplicate content. There are a few ways to resolve this, the most popular being to simply prevent access to the author page and redirect it back to the homepage. This prevents other sites from linking to the duplicate page; their links resolve to the homepage instead.
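As a sketch, that redirect could look like this on an Apache host with mod_rewrite enabled (the /author/ path is WordPress's default; adjust if your site uses a different permalink structure):

```text
# .htaccess — permanently redirect all author archive URLs to the homepage
RewriteEngine On
RewriteRule ^author/.*$ / [R=301,L]
```

Using a 301 (permanent) redirect also passes most link equity from the duplicate URL to the homepage.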
Another option is to use a meta robots "noindex, follow" tag on the duplicate page, in this case the author page. This prevents the page from being indexed but still allows the links on it to be found and crawled. You can also block access to these pages in your robots.txt file, and rules can be targeted at our crawler specifically by using the user-agent rogerbot.
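A sketch of both options described above (the meta tag goes in the `<head>` of the duplicate page; the robots.txt block shows how a rule can be scoped to Moz's crawler alone, with /author/ as an illustrative path):

```html
<!-- on the duplicate (author) page: keep it out of the index, but let its links be crawled -->
<meta name="robots" content="noindex, follow">
```

```text
# robots.txt — this group applies only to Moz's crawler
User-agent: rogerbot
Disallow: /author/
```

Keep in mind the two approaches conflict if combined: a page blocked in robots.txt can't be crawled, so a noindex tag on it will never be seen.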
I hope that makes sense.
Let me know if you have any additional questions or concerns.
Kenny
-
Thanks Guy. I was thinking of subscribing to SeoMoz but the site reports have been less than useful. This is just one of 5 issues I've found.
-
So far, no. Until they fix that little error, you can use Google Webmaster Tools to double-check for real duplicate content.
The spider sees whatever.php?var=1 as a different page because some sites use the parameter to select the page itself, e.g. index.php?p=103 for one page and index.php?p=102 for another, while other sites use URL variables on the same page.
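When the parameter variants really are the same page, the usual fix is a rel=canonical link in the `<head>` of each variant telling crawlers which URL to count (the URL below is just the example from the original question):

```html
<!-- served on /metal-art-tiles/?D=A and any other parameter variants -->
<link rel="canonical" href="http://www.johnsearles.com/metal-art-tiles/">
```

Unlike a robots.txt block, this lets crawlers still reach the variant URLs but consolidates them onto one canonical URL in the index.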
Related Questions
-
Duplicate content nightmare
Hey Moz Community, I ran a crawl test and there is a lot of duplicate content, but I cannot work out why. It seems that when I publish a post, secondary URLs are being created depending on some tags and categories. Or at least, that is what it looks like. I don't know why this is happening, nor do I know if I need to do anything about it. Help? Please.
Moz Pro | MobileDay
Crawl Diagnostics says a page is linking but I can't find the link on the page.
Hi, I have just got my first Crawl Diagnostics report and I have a question. It says that this page: http://goo.gl/8py9wj links to http://goo.gl/Uc7qKq, which is a 404. I can't recognize the 404 URL anywhere on the page, and when searching the code I can't find the %7Blink%7D in the URL that causes the problem. I hope you can help me understand what triggers it 🙂
Moz Pro | SebastianThode
Can't figure out why some of my pages are duplicate content
Within the crawl diagnostics area I'm getting duplicate page content issues on several pages. I don't know why, would anyone be able to tell me how these links are duplicate so I can fix them? http://www.sagenews.ca/Column.asp?id=3010 http://www.sagenews.ca/Column.asp?id=2808 http://www.sagenews.ca/Column.asp?id=2998 http://www.sagenews.ca/Column.asp?id=2837 http://www.sagenews.ca/Column.asp?id=2981
Moz Pro | INMCA
Duplicate page report
We ran a CSV export of our crawl diagnostics related to duplicate URLs after waiting 5 days with no response on how Rogerbot can be made to filter. My IT lead tells me the spreadsheet is showing, literally, "duplicate URLs": it treats a database ID number as the only valid part of a URL. To replicate, just filter the spreadsheet for any number that you see on the page. For example, filtering for 1793 gives the following result:
http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793
http://truthbook.com/index.cfm?linkID=1793
http://truthbook.com/index.cfm?linkID=1793&pf=true
http://www.truthbook.com/blogs/dsp_viewBlogEntry.cfm?blogentryID=1793
http://www.truthbook.com/index.cfm?linkID=1793
There are a couple of problems with the above:
1. It gives the www result as well as the non-www result.
2. It sees the print version (&pf=true) as a duplicate, but these are blocked from Google via the noindex header tag.
3. It thinks different sections of the website (faq / blogs / pages) with the same ID number are the same thing.
In short, this particular report tells us nothing at all. I am trying to get a perspective from someone at SEOMoz to determine whether he is reading the result correctly or there is something he is missing. Please help. Jim
Moz Pro | jimmyzig
Unable to crawl pages
Hi, I am trying to set up a campaign for our website, www.salvationarmy.org.au; however, I can't seem to get a scan of more than three pages. I have tried the following: www.salvationarmy.org.au (only 2 pages), www.salvationarmy.org.au/home (only 1 page), salvationarmy.org.au (only 3 pages). There is a geo-IP redirect on www.salvationarmy.org.au, but the second domain listed above should resolve the full site. I'm a newbie to SEOmoz, so any help would be appreciated! Thanks, Mel
Moz Pro | KingPings
How can I prevent errors of duplicate page content generated by my tags from my wordpress on-site blog platform?
When I add meta data and a canonical reference to the blog tags on my on-site blog, which uses a wordpress.org template, Roger generates duplicate content errors. How can I avoid this problem? I want to use up to 5 tags per post, each with the same canonical reference, and each campaign scan generates errors/warnings for me!
Moz Pro | ZoeAlexander
What is the best method to solve duplicate page content?
The issue I am having is that an overwhelmingly large number of pages on cafecartel.com show duplicate page content. But when I check the errors on SEOmoz, it shows that the duplicate content is from www.cafecartel.com, not cafecartel.com. So first of all, does this mean there are two sites, and is this a problem I can fix easily (i.e. by redirecting the URL and deleting the extra pages)? Is this going to make all other SEO useless, given that it shows nearly every page as having duplicate page content? Or am I just completely reading the data wrong?
Moz Pro | MarkP_
How to Fix the Errors with Duplicate Title or Content?
The latest Crawl Diagnostics run has found 160 errors on my site. The error is that the same content or title is used on two different pages: both my root domain (han-mark.com) and the www subdomain. Does it matter (with or without www)? How serious is that error? Do I need to fix all the errors (and the hundreds of warnings too)? What's the best practice? Is there any guide on how to do it, or tools for doing it the fast way? Viggo Joergensen
Moz Pro | hanmark