Why are these pages considered duplicate page content?
-
A recent crawl diagnostic for a client's website returned several new duplicate page content errors.
The problem is, I'm not sure where the errors come from, since the content on each page is different.
Here are the pages that SEOmoz reported as having duplicate page content errors:
http://www.imaginet.com.ph/wireless-internet-service-providers-term
http://www.imaginet.com.ph/antivirus-term
http://www.imaginet.com.ph/berkeley-internet-name-domain
http://www.imaginet.com.ph/customer-premises-equipment-term
The only similarity I see is the headline, which says "Glossary Terms Used in this Site" - I hope that one sentence is the reason for the error.
Any input is appreciated as I want to find out the best solution for my client's website errors.
Thanks!
-
Maybe it's just a sample.
But what I would do is look at the source code. There is a huge amount of script and menu content that could possibly be removed; your actual content is almost at the bottom of the page and looks like about 2% of it. Move the scripts into external files, trim the menus, and try to get the proportion of original content higher.
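One way to sanity-check that "about 2%" estimate is to strip the scripts and tags and measure how much of the raw HTML is actually readable text. A minimal sketch using only Python's standard library (the sample page below is made up for illustration, not taken from the site):

```python
from html.parser import HTMLParser

class VisibleTextExtractor(HTMLParser):
    """Collect text nodes, skipping the contents of <script> and <style>."""
    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.chunks.append(data)

def text_to_html_ratio(html: str) -> float:
    """Fraction of the raw HTML source that is visible, readable text."""
    parser = VisibleTextExtractor()
    parser.feed(html)
    visible = " ".join("".join(parser.chunks).split())
    return len(visible) / max(len(html), 1)

# Toy page: script in the head, a one-sentence definition in the body
page = ("<html><head><script>var menu = 'lots of code';</script></head>"
        "<body><p>LAN: a local area network.</p></body></html>")
ratio = text_to_html_ratio(page)  # fraction of the page that is readable text
```

Run this against the saved HTML of one of the glossary pages; if the ratio really is down around a few percent, most of what the crawler sees is shared template rather than unique content.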
-
I understand what you're saying.
However, there are several hundred pages like that - one-word definition pages like the four I mentioned above.
Here's one example of a 1 word definition page that did not report any error:
http://www.imaginet.com.ph/local-area-network-definition
Among all those pages, there are only around 14-20 getting duplicate page content errors (including the four I mentioned above).
That is why I'm curious why those four in particular reported duplicate page content errors.
Thanks!
-
Hi John,
It shows duplicate pages because there is too little difference in the search-engine-readable text between these pages.
For example, compare the titles and only two words are different; compare the page content and only the middle paragraph differs from one page to the next.
Try to write different titles and descriptions for all the pages, and if possible add a little more content.
I think that will help each page stand on its own as unique.
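To see why pages with identical boilerplate can get flagged even when the middle paragraph differs, here is a rough sketch of a word-level similarity check using Python's difflib. The sample text and the idea of a flagging threshold are illustrative only - this is not Moz's actual algorithm:

```python
import difflib

def page_similarity(text_a: str, text_b: str) -> float:
    """Word-level similarity between two pages' visible text, 0.0 to 1.0."""
    words_a, words_b = text_a.split(), text_b.split()
    return difflib.SequenceMatcher(None, words_a, words_b, autojunk=False).ratio()

# Shared template text (headline, menus, footer) dwarfs the unique definition
boilerplate = "Glossary Terms Used in this Site plus shared menu and footer text " * 10
page_a = boilerplate + "Antivirus: software that detects and removes malware."
page_b = boilerplate + "CPE: equipment installed at the customer's premises."

sim = page_similarity(page_a, page_b)  # very high despite different definitions
```

Even though the two definitions share no words, the shared template pushes the overall similarity above 0.9, which is why shortening the boilerplate or lengthening the unique content moves a page out of the duplicate bucket.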
Thanks.