I think Panda was a conspiracy.
-
It's just a theory, but I think that Panda was not really an algorithm update but rather a conspiracy.
Google went out of their way to announce that a new algorithm was being rolled out. The word on the street was that content farms would be affected. Low quality sites would be affected. Scrapers would be affected. So, everyone with decent sites sat back and said, "Ah...this will be good...my rankings will increase."
And then, the word started coming in that some really good sites took a massive hit. We've got a lot of theories on what could be causing the hit, but there doesn't seem to be an obvious fix.
Many of the key factors that have been suggested as causes of a site looking bad in Panda's eyes are present on one of my sites, but that site actually increased in rankings after Panda.
So, this is my theory: I think that Google made some random changes that made no sense. They made changes that would cause some scraper sites to go down but they also knew that many decent sites would decline as well.
Why would they do this? The result is fantastic in Google's eyes. They have the whole world of web design doing all they can to create the BEST quality site possible. People are removing duplicate content, reducing ad clutter and generally creating the best site possible. And this is the goal of Larry Page and Sergey Brin...to make it so that Google gives the user the BEST possible sites to match their query.
I think that a month or so from now there will be a sudden shift in the algo again and many of those decent sites will have their good rankings back again. The site owners will think it's because they put hard work into creating good quality, so they will be happy. And Google will be happy because the web is a better place.
What do you think?
-
hahahaha. I agree with you. First Google manipulates results to see if Bing is copying them. Then, soon after, this update comes along while Google is carefully building its own content sites optimized for the robots, and we can't even think about duplicate content. hehehehe
-
Actually, I really liked the "content registry" idea.
A library of content where you could register what you have created and, optionally, a link to where you want to be considered the main source.
At least it would be 10x more useful than the Google Knol idea..
-
I would pay a fee to protect my best content in the Google SERPs.
-
Actually you have a point there... if JCPenney was indeed the catalyst (which I could easily imagine it being), then the short time between that and the update would surely mean it had to have been rushed. I never considered that before.
-
ha ha... I think they did rush this out.... they were quickly trying to pull up their pants after getting embarrassed by the JCPenney problem... they needed to bust a few heads quickly...
-
Ha ha, maybe
I think it's something infinitely less planned out, and that they simply rushed this change out the door without fully understanding what it would do to the SERPs.
Although I do think you're right that in a few months (in what will be claimed to be a second Panda sweep) that things will go back and only the very worst offenders will stay penalised.
-
Yes, I like the content registry idea! It would probably need to be a paid service, though. And to cover dupes that are okay, maybe they could allow duplicates as long as they reference back to the source in the registry (for news, quotations, etc., where dupes can't be avoided).
-
Interesting ideas. Thanks for sharing them.
I think that Google is talking a lot about this as a "quality website update"... and that is getting them attention in the media but it is also kicking a lot of webmasters in the butt to clean up their websites.
I think that Google should make a "content registry" where I can submit my content and say "this is mine", and then copies or spins of that content will not get traction in the SERPs.
And I think that they should take a closer look at websites in the AdSense program, because the ability to monetize crap and theft is driving a lot of bad odor in the SERPs.
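To make the registry idea concrete, here's a rough Python sketch of how such a service could work: fingerprint the content, record who registered it first, and let anyone look up the original source of a duplicate. Everything here (the ContentRegistry class, the normalization scheme, the example sites) is hypothetical — nothing like this actually exists at Google:

```python
import hashlib
import time

def fingerprint(text):
    """Collapse whitespace and lowercase before hashing, so trivial
    reformatting doesn't change the fingerprint."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

class ContentRegistry:
    """Toy registry: the first site to register a fingerprint owns it."""

    def __init__(self):
        # fingerprint -> (owner, canonical_url, registered_at)
        self._entries = {}

    def register(self, text, owner, canonical_url=None):
        fp = fingerprint(text)
        if fp not in self._entries:
            self._entries[fp] = (owner, canonical_url, time.time())
        return fp

    def original_source(self, text):
        """Return (owner, canonical_url, registered_at) for the first
        registrant of this content, or None if it was never registered."""
        return self._entries.get(fingerprint(text))

registry = ContentRegistry()
registry.register("My original article text.", "site-a.com",
                  "https://site-a.com/article")
# A scraper re-registering the same text does not displace the original:
registry.register("My  ORIGINAL article  text.", "scraper.com")
owner, url, _ = registry.original_source("My original article text.")
print(owner)  # site-a.com
```

A real service would also need fuzzy matching to catch spun content, since even light rewriting defeats an exact hash.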
-
Haha I like it!!
Well, if it's not what happened, they'll wish they thought of it anyway lol
My view on why other sites got hit is just that they had at least some links coming from sites that got hit... i.e. a site has 100 backlinks, 10 are from articles on article sites, the article sites get hit... it loses 10 backlinks (or at least some of the value from those backlinks)... hence, a good site takes a hit too
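To put rough numbers on that theory, here's a toy back-of-the-envelope model in Python — the source categories, counts and the flat value of 1.0 per link are all invented for illustration:

```python
def remaining_link_value(backlinks, devalued_sources, devaluation=1.0):
    """Sum a page's link value after some link sources are devalued.
    `backlinks` maps source type -> (count, value_per_link);
    `devaluation` is the fraction of value lost (1.0 = wiped out)."""
    total = 0.0
    for source, (count, value) in backlinks.items():
        if source in devalued_sources:
            value *= 1.0 - devaluation
        total += count * value
    return total

# The example above: 100 links, 10 of them from article sites.
backlinks = {"editorial": (90, 1.0), "article_sites": (10, 1.0)}
before = remaining_link_value(backlinks, devalued_sources=set())
after = remaining_link_value(backlinks, devalued_sources={"article_sites"})
print(before, after)  # 100.0 90.0
```

Even a partial devaluation (say 50%) would leave the site at 95 of its original 100 units of link value — still a visible hit without the site itself doing anything wrong.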
Related Questions
-
What do You Think the Biggest Search Trends will Be in 2020?
Just interested to hear everyone's thoughts. Personally I think that, even though voice search is already pretty big, it's about to get much, much larger, and this may even entirely change the infrastructure of the web. What are your thoughts for changes in search in 2020?
Algorithm Updates | | effectdigital2
-
Do you think this page has been algorithmically penalised or is it just old?
Here is the page: http://www.designquotes.com.au/business-blog/top-10-australian-business-directories-in-2012/ It's fairly old, but when it was first written it hit #1 for "business directories". After a while it dropped but was receiving lots of traffic for long-tail variations of "business directories Australia". As of the 4th of October (Penguin 2.1) it lost traffic and rankings entirely. I checked its link profile and there isn't anything fishy: From Google Webmaster https://docs.google.com/spreadsheet/ccc?key=0AtwbT3wshHRsdEc1OWl4SFN0SDdiTkwzSmdGTFpZOFE&usp=sharing In fact, two links are entirely natural: http://blog.businesszoom.com.au/2013/09/use-customer-reviews-to-improve-your-website-ranking/ and http://dianajones.com.au/google-plus-local-equals-more-business-blog/ Yet when I search for a close match to the title in Google AU, the article doesn't appear within even the first 4 pages: https://www.google.com.au/#q=top+10+Australian+Business+Directories&start=10 Is this simply because it's an old article? Should I re-write it, update the analysis and use a rel=canonical on the old article pointing to the new one?
Algorithm Updates | | designquotes0
-
I think my inbound link anchor text looks un-natural to google - How to fix?
Hi all,
For a bit of background, see this question I posted recently: http://www.seomoz.org/q/lost-over-65-of-organic-visits-since-sept-please-help
From the responses there, and from looking into my backlinks and my competitors', I can see an issue with the anchor text on my inbound links... nearly all keywords and very, very few brand names etc...
From what I can gather (using Open Site Explorer) the page in question has:
- 1,100 inbound links from 900 domains
- These use 90 different anchor texts
- 106 of these links use my brand / website name in the anchor text
- These 106 links are spread over 18 domains (73 from 1 directory)
- About 5-10% of the links are from directories; the rest are from what I would describe as "proper websites"
From my very limited knowledge of this, the issue is that my brand / website should have a far higher ratio of links using it as the anchor text than any keyword... which, as you can see from the above, is not the case... If it wasn't for that 1 directory there would only be 33 branded links out of over 1,000...
I need to start fixing this, but was wondering where to start... Below are some options I could try. I have no idea if these would help or hinder, so any advice on the potential effects of the options below would be very helpful (the options are hypothetical, I have no idea if I will be able to get them done - just thinking out loud here):
1. Get as many of the "directory" links as possible removed
2. Remove keywords from 50-60% of the links and replace them with branding
3. Or try to add branding to 50-60% of the anchor texts, something like [Brand] + [keyword]
4. Forget about what's been done previously (changing it will not help in any way) and focus on branding in anchor text for any future link building
Thanks, James
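For what it's worth, the branded-vs-keyword tally James describes can be sketched in a few lines of Python. The anchor_profile helper, the brand term and the link data below are all made up for illustration, with the counts loosely matching his numbers:

```python
def anchor_profile(links, brand_terms):
    """Summarize branded vs keyword anchors.
    `links` is a list of (anchor_text, source_domain) pairs."""
    branded = [(a, d) for a, d in links
               if any(term in a.lower() for term in brand_terms)]
    return {
        "total_links": len(links),
        "total_domains": len({d for _, d in links}),
        "branded_links": len(branded),
        "branded_domains": len({d for _, d in branded}),
        "branded_ratio": len(branded) / len(links) if links else 0.0,
    }

# Invented data, loosely matching the numbers above: 1,100 links,
# 106 of them branded, spread over 18 domains.
links = [("best widgets", f"site{i}.com") for i in range(994)]
links += [("acme widgets", f"brand{i % 18}.com") for i in range(106)]
profile = anchor_profile(links, brand_terms=["acme"])
print(profile["branded_ratio"])  # ~0.096, i.e. under 10% branded anchors
```

A tally like this makes it easy to re-check the ratio after each round of link cleanup or outreach.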
Algorithm Updates | | isntworkdull0
-
Do you think Google is destroying search?
I've seen garbage in Google results for some time now, but it seems to be getting worse.
I was just searching for a line of text that was in one of our stories from 2009. I just wanted to check that story and I didn't have a direct link. So I did the search and I found one copy of the story, but it wasn't on our site. I knew that it was on the other site as well as ours, because the writer writes for both publications. What I expected to see was the two results, one above the other, depending on which one had more links or better on-page for the query.
What I got didn't really surprise me, but I was annoyed. In #1 position was the other site. That was OK by me, but ours wasn't there at all. I'm almost used to that now (not happy about it and trying to change it, but not doing well at all, even after 18 months of trying).
What really made me angry was the garbage results that followed. One site, a WordPress blog, has tag pages and category pages being indexed. I didn't count them all but my guess is about 200 results from this blog, one after the other, most of them tag pages, with the same content on every one of them. Then the tag pages stopped and it started with dated archive pages, dozens of them. There were other sites, some with just one entry, some with dozens of tag pages. After that, porn sites, hundreds of them. I got right to the very end - 100 pages of 10 results per page.
That blog seems to have done everything wrong, yet it has interesting stats. It is a PR6, yet Alexa ranks it 25,680,321. It has the same text in every headline. Most of the headlines are very short. It has all of the category, tag and archive pages indexed. There is a link to the designer's website on every page. There is a blogroll on every page, with links out to 50 sites. None of the pages appear to have a description. There are dozens of empty H2 tags and the H1 tag is 80% of the way through the document. Yet Google lists all of this stuff in the results.
I don't remember the last time I saw 100 pages of results; it hasn't happened in a very long time. Is this something new that Google is doing? What about the multiple tag and category pages in results - is this just a special thing Google is doing to upset me, or are you seeing it too? I did eventually find my page, but not in that list. I found it by using site:mysite.com in the search box.
Algorithm Updates | | loopyal0
-
What do you think Google analyzes for SERP ranking?
I've been doing some research trying to figure out how the Google algorithm works. The one thing that is constant is that nothing is constant. This makes me believe that Google takes a variable that all sites have and divides it by the site's overall score. One example would be taking the load time in ms and dividing it by the total number of points the website scored. This would give all of the websites a random appearance, since that variable would throw off all the other constants. I'm going to continue doing research, but I was wondering what you guys think matters in the Google algorithm. -Shane
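A toy version of Shane's guess, with invented sites, scores and load times — it just shows how mixing a per-site variable like load time into the formula scrambles an otherwise stable order:

```python
def rank(sites, mix_in_load_time=False):
    """Rank sites best-first. Baseline: highest score wins. The guess
    above: divide load time (ms) by score; a lower ratio is better."""
    if mix_in_load_time:
        return [name for name, *_ in sorted(sites, key=lambda s: s[2] / s[1])]
    return [name for name, *_ in sorted(sites, key=lambda s: s[1], reverse=True)]

sites = [
    ("a.com", 90, 900),   # high score, slow
    ("b.com", 80, 400),   # mid score, fast
    ("c.com", 70, 200),   # low score, very fast
]
print(rank(sites))                         # ['a.com', 'b.com', 'c.com']
print(rank(sites, mix_in_load_time=True))  # ['c.com', 'b.com', 'a.com']
```

The same three sites come out in the opposite order, which is exactly the "random appearance" the post describes: the constants haven't changed, only the per-site divisor.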
Algorithm Updates | | Seoperior0
-
Yet another Panda question
Hi Guys, I'm just looking for confirmation on something..... In the wake of Panda 2.2, one of my pages has plummeted in the rankings whilst other similar pages have seen healthy improvements. Am I correct in thinking that Panda affects individual pages and doesn't tar an entire site with the same brush? Really I'm trying to see if Panda is the reason for the drop on one page or whether it could be something else. The page in question has dropped 130 positions - not just a general fluctuation. Thanks in advance for your responses!!!
Algorithm Updates | | A_Q0
-
Was Panda applied at sub-domain or root-domain level?
Does anyone have any case studies or examples of sites where a specific sub-domain was hit by Panda while other sub-domains were fine? What's the general consensus on whether this was applied at the sub-domain or root-domain level? My thinking is that Google already knows broadly whether a "site" is a root-domain (e.g. SEOmoz) or a sub-domain (e.g. tumblr) and that they use this logic when rolling out Panda. I'd love to hear your thoughts and opinions though?
Algorithm Updates | | TomCritchlow1
-
Another Panda update?
Our high-quality (IMHO!) editorial technology site lost around 20% of its Google search referrals when Panda went live in the UK on the 12th. But in the last couple of days it's returned to "normal". Is that the effect of our frantic scrabbling around trying things, or have Google tweaked the algorithm again to remove the editorial sites which got caught accidentally?
Algorithm Updates | | StuartAnderton0