How Does This Site Get Away With It?
-
The following site is huge in the movie trailer industry:
It ranks #3 in Google for "Movie Trailers" and has high rankings for multiple other major keywords in the industry.
Here's the thing: virtually all of their movie trailer pages contain copy/pasted content from other sites. The movie trailer descriptions are the ones supplied by the movie studios, and therefore the same content appears on thousands of websites and blogs.
We all know Google hates duplicate content at the moment... so how does this site get away with it?
Does its root-domain authority keep it up there?
-
I have seen instances where sites add iframes when they have to use duplicate content, then nofollow the iframes and tell the robots.txt file to ignore them so they aren't analyzed. It actually works, but it's sloppy code to have an iframe on every page, sometimes multiple. Or at least it is to me for that purpose, but I guess it's better than getting penalized for dup content.
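A minimal sketch of that setup (the `/embedded/` path and file name here are hypothetical): the page pulls the duplicated description in via something like `<iframe src="/embedded/description.html">`, and robots.txt keeps crawlers out of that path so the duplicate text is never analyzed as part of the page:

```
# robots.txt — keep the iframed duplicate content out of crawlers' analysis
User-agent: *
Disallow: /embedded/
```

The trade-off is exactly what's described above: every page carries extra iframe markup, and the framed text contributes nothing to the page's own indexable content.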
Have a great night.
Matthew Boley
-
Hey Rhys.
A few questions.
Does this site have an affiliate feed?
For example, are other people copying the content from this site through an affiliate feed of some sort?
That could be one of the explanations here.
(Actually, they do; I just found it.)
In the case of dup content, Google looks at how trustworthy a site's content is based on its overall ranking. So if this site is putting out content every day and then pinging Google to come crawl the site first, that's all it would need to be verified as the originator of the content. Other sites can then easily copy the content from this site, but as long as it gets indexed on this site first, it wouldn't really have a problem.
Same thing goes for a blog you write.
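The "ping Google so it crawls your new content first" step above can be as simple as hitting the sitemap ping endpoint after publishing. A minimal sketch in Python (the helper name is illustrative, and note that Google has since deprecated this ping service):

```python
from urllib.parse import quote


def build_ping_url(sitemap_url: str) -> str:
    """Build the Google sitemap-ping URL so freshly published
    pages get crawled soon after going live."""
    return "https://www.google.com/ping?sitemap=" + quote(sitemap_url, safe="")


if __name__ == "__main__":
    # After publishing, an HTTP GET to this URL requests a recrawl.
    print(build_ping_url("https://example.com/sitemap.xml"))
```

Getting crawled quickly is what lets the original site be indexed before the copies, which is the whole point of the originator argument above.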
You would have to dig a little deeper into their link structure. Their overall ranking is pretty high; they have tons of spun links to each URL, about 100 linking pages for each domain name. A few good links from MTV and VH1, and mostly a lot of blogs.
But yeah, you're right, their SE traffic is off the charts. They rank really well for some movie names as well.
What is your end goal in running a comparison against them?
Their root domain authority is fairly high, and that plays a big factor in how well they rank for a lot of these keywords as well.
The site is about 5.3 years old and its domain authority is around 74 to 79, depending where you look.
It's PR6, but you might need to dig deeper.
-
Could be that the other key factors like traffic and incoming links are astronomical, so Google's duplicate-content penalty is outweighed.
It's a case of "doing it better" and not necessarily doing it first. While they may be scraping content and it's all duped, they simply must get huge traffic numbers and incoming links because all that content is in one place.
I'm just assuming of course - without doing an audit, who knows - but it must be frustrating for you if you are working on a campaign against them.