Why are there significant changes in the amount of duplicate content without any known action?
-
I've noticed a surprisingly rapid change in duplicate content over the past month. We had roughly 6,000 instances of duplicate content; after disavowing bad links we went down to 3k, which made perfect sense to me. But after that, without doing anything whatsoever, the instances of duplicate content decreased again, from last Thursday the 20th to yesterday, down to 2k. Could this just be delayed indexing of pages, or are there other factors at work here? Thanks for the help.
-
Come to think of it, the only thing we did do was fire the SEO company we had working for us and start doing SEO in-house, but that doesn't explain such rapid shifts in duplicate content.
-
Without really being involved, it is very hard to figure this out exactly.
For now, I wouldn't worry unless you start to see problems, such as a drop in the number of pages actually indexed, a drop in traffic, or fewer searches where you appear.
-Andy
-
To the best of my knowledge, we've changed nothing about our site recently, which is why I'm trying to attribute this rapid drop to something, and the only thing we've done is disavow the links. The disavow was just a shot in the dark to try to understand these changes.
-
Are you using any parameters (tracking/session IDs) on your site? Also, what Andy said: disavowing wouldn't decrease this number. It was something else.
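To illustrate why tracking/session parameters inflate duplicate-content counts: every parameter variant is a distinct URL serving the same page, so a crawler counts each one as a duplicate. A minimal sketch, using a hypothetical list of tracking parameters (adjust for your own site), shows how such variants collapse once the parameters are stripped:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical set of parameters that only track visits and never
# change page content -- not an official list, adjust for your site.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid", "sid", "ref"}

def normalize(url):
    """Strip tracking/session parameters so duplicate URL variants collapse."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k.lower() not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

urls = [
    "http://example.com/page?utm_source=news&id=7",
    "http://example.com/page?id=7&sessionid=abc123",
]
# Both variants normalize to the same URL, so a crawler that ignored
# the tracking parameters would see one page, not two duplicates.
print({normalize(u) for u in urls})
```

If a set like this collapses many of your crawled URLs into one, parameters are likely the source of the duplicate warnings, and canonical tags or parameter handling settings would be the usual fix.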
-
You can get duplicate-content problems from all over the web, but a disavow would have absolutely no impact on this. Its purpose is to distance you from external links that you don't wish to be associated with.
As this is something related to the Moz products, I can't give you an answer on that, I'm afraid.
Have you made any actual changes to the site that could account for this? If you can, re-categorise this post to include Product Support.
-Andy
-
Can you look at your crawl diagnostics and see the difference in how many pages were crawled at each of those intervals? That would help diagnose what's happening here.
Thanks
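One way to act on this suggestion: export the crawled URLs from each interval and diff the two sets, so you can see exactly which pages appeared or disappeared between crawls. A rough sketch, assuming you can get each crawl as a simple list of URLs (the example lists are hypothetical):

```python
def diff_crawls(old_urls, new_urls):
    """Return (dropped, added): URLs gone since the old crawl and URLs new to the latest one."""
    old, new = set(old_urls), set(new_urls)
    return sorted(old - new), sorted(new - old)

# Hypothetical URL lists exported from two crawl reports.
crawl_prev = ["http://example.com/", "http://example.com/a", "http://example.com/b"]
crawl_last = ["http://example.com/", "http://example.com/a", "http://example.com/c"]

dropped, added = diff_crawls(crawl_prev, crawl_last)
print("dropped:", dropped)
print("added:", added)
```

If the duplicate count fell mainly because whole groups of URLs stopped being crawled, that points to a crawl-coverage change rather than a real cleanup.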
-
I was under the impression that duplicate content can be caused not only by duplicates within the site itself but also by outside sites, even notable ones, directly reusing the same content. http://moz.com/blog/duplicate-content-in-a-post-panda-world
See below:
(3) Cross-domain Duplicates
A cross-domain duplicate occurs when two websites share the same piece of content:
These duplicates could be either “true” or “near” duplicates. Contrary to what some people believe, cross-domain duplicates can be a problem even for legitimate, syndicated content.
Anyway, we're using Moz's dashboard to give us insights into duplicate content.
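For the "true" vs. "near" duplicate distinction quoted above: near-duplicates are commonly measured by comparing word shingles between two documents. A minimal sketch (not how Moz computes it, just the standard Jaccard-over-shingles idea):

```python
def shingles(text, k=4):
    """Set of k-word shingles, the standard unit for near-duplicate comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets: 1.0 = true duplicate."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original   = "our guide to duplicate content and how search engines handle it"
syndicated = "our guide to duplicate content and how search engines treat it"

# One changed word breaks the two shingles that overlap it, so the
# score drops below 1.0 but stays high -- a near duplicate.
print(round(jaccard(original, syndicated), 2))  # -> 0.6
```

Syndicated copies on other domains typically score very high on a measure like this, which is why cross-domain duplicates get flagged even when the content is legitimate.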
-
Hi,
First of all, disavowing has nothing to do with the number of duplication warnings you get. It can only affect inbound links, and even then, you won't see any drop in these through Webmaster Tools.
What are you using to see the duplicate pages?
-Andy