Duplicate Page Titles and Content
-
The SEOmoz crawler has found many pages like this on my site with /?Letter=Letter, e.g. http://www.johnsearles.com/metal-art-tiles/?D=A. I believe it is finding multiple cached versions of each page and identifying them as duplicates. Is there any way to screen out these multiple cache results?
-
I think I figured out what to add to robots.txt to screen out any URL with a '?' in it. I believe these ?-URLs are session IDs. I'll see what Roger-bot does the next time it crawls my site.
Disallow: /*?
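A quick way to sanity-check a wildcard rule like that before relying on it: the snippet below is a simplified sketch of how Googlebot-style crawlers interpret `*` in a Disallow path (it ignores `$` anchors, Allow precedence, and rule-length tie-breaking, so it's for illustration only, not a full robots.txt parser).

```python
import re

def robots_rule_matches(rule, path):
    """Check whether a robots.txt Disallow rule (with * wildcards, as
    most modern crawlers interpret them) matches a URL path+query.
    Simplified sketch: no $ anchors, no Allow precedence."""
    # Escape regex metacharacters, then turn each escaped '*' back
    # into '.*' so it matches any run of characters.
    pattern = re.escape(rule).replace(r"\*", ".*")
    # Disallow rules are prefix matches, so anchor only at the start.
    return re.match(pattern, path) is not None

# The rule from this thread blocks any URL containing a query string:
print(robots_rule_matches("/*?", "/metal-art-tiles/?D=A"))  # True
print(robots_rule_matches("/*?", "/metal-art-tiles/"))      # False
```

Note that `Disallow: /*?` blocks every parameterized URL sitewide, so make sure no legitimate pages on the site depend on query strings before adding it.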
-
Hey John,
My apologies for any issues you are experiencing with our service. I would definitely like to address any other issues you may be running into besides this one. You can either respond to this Q&A thread or submit a private ticket to our help team: go to our help hub (www.seomoz.org/help) and click the "contact help team" button.
As for your duplicate content question, it is important to know that any time the same content is found on more than one URL, it is considered duplicate content. WordPress is a good example of a platform where duplicate content is often found but can be easily addressed.
In WordPress you could have your homepage at www.domain.com and an author page at www.domain.com/author/authorname. If your blog only has one author, though, that author page is going to be identical to your homepage, and the result is duplicate content on your site. There are a few ways to resolve this, the most popular being to simply prevent access to the author page and redirect it back to the homepage. This keeps other sites from linking to the duplicate page; they link directly to the homepage instead.
Another option is to use a meta robots noindex, follow tag on the duplicate page, in this case the author page. This prevents the page from being indexed but still allows the links on the page to be found and crawled. You can also block access to these pages in your robots.txt file, and you can target our crawler specifically by using the user-agent rogerbot.
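As a sketch of the meta robots approach (the author path here is just the example from above; place the tag in the head of whichever page is the duplicate):

```html
<!-- In the <head> of the duplicate page (e.g. /author/authorname):
     keep the page out of the index, but still crawl its links -->
<meta name="robots" content="noindex, follow">
```

And to scope a robots.txt block to Moz's crawler only, you would put the Disallow line under a rogerbot user-agent section rather than `User-agent: *`, e.g. `User-agent: rogerbot` followed by `Disallow: /author/`.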
I hope that makes sense.
Let me know if you have any additional questions or concerns.
Kenny
-
Thanks Guy. I was thinking of subscribing to SeoMoz but the site reports have been less than useful. This is just one of 5 issues I've found.
-
So far, no. Until they fix that little error you can use Google Webmaster Tools to double-check for real duplicate content.
The spider sees whatever.php?var=1 as a different page because some sites use query parameters to identify distinct pages (index.php?p=103 is one page and index.php?p=102 is another), while other sites use URL parameters that return the same page.
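For the second case, where the parameter variants really do serve the same content, one common fix is a rel=canonical tag pointing each variant back at the preferred URL (a sketch with a hypothetical domain):

```html
<!-- In the <head> of index.php?var=1, index.php?var=2, etc.,
     when they all render the same page -->
<link rel="canonical" href="http://www.example.com/index.php">
```

That tells crawlers to consolidate the duplicates onto the canonical URL instead of treating each query-string variant as a separate page.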