An Unfair Content-Related Penalty :(
-
Hi Guys,
Google.com.au
Website: http://partysuppliesnow.com.au/
We had a massive drop in search queries in WMT around the 11th of September this year. I investigated, and it seemed as though there were no algorithm updates around that time.
Our site is only receiving branded search now, and after investigating I am led to believe that Google has mistakenly caught our website in the Panda algorithm. There are no manual penalties applied to this site, as confirmed by WMT.
Our product descriptions are pretty much all unique, but I have noticed that when typing a portion of text from these pages into Google search using quotation marks, the shopping affiliate sites we use are displayed first, while our page is nowhere to be seen or last in the results. This leads me to believe that Google thinks we have scraped the content from these sites, when in actual fact they have scraped it from us. We also have G+ authorship set up.
Typing a product's full name into Google (I tried a handful), our site is not in the top 100, or at times even the top 200. I think this further confirms that we are penalised.
We would really appreciate some opinions on this. Any suggested course of action would be great. We don't particularly want to invest in writing the content again.
From our point of view, it looks like Google is stopping our site from ranking because it's getting mixed up about who the originator of our content is.
Thanks, and we really appreciate it.
-
Hey Jarrod,
I'm afraid there isn't anything you can actually do to tell Google you are the original author of your content, other than the tips Remus mentioned.
However, there is a service you can use to help identify sites that are duplicating your content. It's called Copysentry, and it automatically scans the web to check for content duplication. You could use this, in conjunction with DMCA takedown requests (as mentioned in Remus's post), to help defend against this in future.
-
Hi guys,
Thank you all for your kind advice. We have decided to re-write our content (product descriptions). This time we will write two types of descriptions: one for our site and one for our affiliates (who promote our products). We hope Google won't confuse them this time.
As we are going to write the content again, I am still afraid it could be stolen again. So, is there a way we could tell Google that we are the originator of this new content?
If there isn't any solution, I think we would lose our rankings again, right? I don't want to lose our efforts again. So, can you suggest any concrete solution?
Thanks again, guys.
Jarrod -
Our product descriptions are pretty much all unique, but I have noticed that when typing a portion of text from these pages into Google search using quotation marks, the shopping affiliate sites we use are displayed first, while our page is nowhere to be seen or last in the results.
I saw the same thing. There is your problem.
This leads me to believe that Google thinks we have scraped the content from these sites, when in actual fact they have scraped it from us. We also have G+ authorship set up.
Although Google says they are "pretty good" at attributing content to the creator, the truth is that they suck at it.
Lots of people have this problem. Guard your content so it doesn't get out to affiliates and shopping engines. This means strongly enforced rules for your affiliates, and blocking other crawlers from your site while still allowing Google in.
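A minimal robots.txt sketch of that "block other crawlers, allow Google" idea. The bot name below is illustrative; identify the actual offenders from your server logs. Also keep in mind robots.txt is only advisory, so persistent scrapers may need to be blocked at the server level (by user-agent or IP) instead:

```
# Allow Google's crawler full access
User-agent: Googlebot
Disallow:

# Block a known scraper bot (name is illustrative)
User-agent: ExampleScraperBot
Disallow: /
```

Be careful with a blanket `User-agent: *` / `Disallow: /` rule, as that would also shut out legitimate engines like Bing.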
-
In addition, going forward you should always ensure you have two types of content: a set of content you use on your own site, and another set that you supply to affiliate sites and any other sites you supply products to.
I know this isn't much help now, but it's something you should do in future to prevent such issues.
-
Hi Jarrod,
You are in a very complicated situation. I hope you can find a solution.
This video posted by Matt Cutts a while ago might help you with a few additional tips:
How can I make sure that Google knows my content is original?
- DMCA request: http://www.google.com/dmca.html
- Google News source attribution metatags: link here
- Or even spam report like Matt Cutts suggests.
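For reference, the Google News source attribution metatags look roughly like this (they apply primarily to news publishers, Google treats them as hints rather than guarantees, and the URLs below are placeholders):

```html
<!-- On the page that originally published the content -->
<meta name="original-source" content="http://www.partysuppliesnow.com.au/your-product-page">

<!-- On a page republishing content first published elsewhere -->
<meta name="syndication-source" content="http://original-publisher.example/source-page">
```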
-
Hi Jarrod,
The first thing I noticed is that a lot of pages on your site don't contain a rel=canonical tag. For example, this one: http://www.partysuppliesnow.com.au/view-products/96/LED-Furniture
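For reference, a self-referencing canonical tag goes in the head of each page and looks like this:

```html
<!-- In the <head> of the page, pointing at the preferred URL for that page -->
<link rel="canonical" href="http://www.partysuppliesnow.com.au/view-products/96/LED-Furniture">
```

That way, even if scrapers copy the page's HTML wholesale, the markup itself declares which URL you consider the original.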
We know that Google is not particularly good at identifying the original source of content. So, you can report the sites that scraped your content to Google (https://www.google.com/webmasters/tools/spamreport?hl=en). That'll let Google know about the issue and, hopefully, lift the penalty off your site and penalize the other sites.
Another issue could be the Authorship setup on product pages, as that can be considered Authorship abuse. Generally, you don't want to link a Google+ profile to a site's homepage or other generic pages.
I've had some experience with Panda, and I can say that no-indexing is very effective in fighting it. If you know of a significant number of low-quality pages on your site, pages you wouldn't want to land on as a searcher, you should add a meta noindex tag in the head section of those pages. It takes some time to get out of the Panda box.
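For reference, the tag looks like this ("follow" keeps the page's links crawlable while keeping the page itself out of the index):

```html
<!-- In the <head> of each low-quality page -->
<meta name="robots" content="noindex, follow">
```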
Regards,
Rohit