Help, really struggling with fixing mistakes post-Penguin
-
We had previously implemented a strategy of paying for lots of links and focusing on 3 or 4 keywords as our anchors, which used to REALLY work (I know, I know, bad black hat strategy - I have since learned my lesson). These keywords and others have plummeted as much as 100 spots since Panda 3.3 and Penguin. So I'm trying to go in and fix all our mistakes, because our domain is too valuable to us to just start over from scratch.
Yesterday I literally printed a 75-page document of all of our links according to Open Site Explorer. I have been going in and manually changing anchor text wherever I can, and taking down the most egregious links if possible. This has involved calling and emailing webmasters, digging up old accounts and passwords, and otherwise just trying to diversify our anchor text and remove bad links. I've also gone into our site and edited some internal links (also too weighty on certain keywords) and removed other links entirely.
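For anyone running a similar audit, scripting it beats a 75-page printout. A minimal sketch of an anchor-text concentration check, assuming a CSV export with `URL` and `Anchor Text` columns (the sample rows and the 50% flag threshold are purely illustrative, not an official guideline):

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical link export: URL, Anchor Text (stand-in for a real CSV file)
sample = StringIO(
    "URL,Anchor Text\n"
    "http://blog1.example/post,cheap widgets\n"
    "http://blog2.example/post,cheap widgets\n"
    "http://dir.example/listing,cheap widgets\n"
    "http://news.example/story,Example Brand\n"
)

# Tally how often each anchor phrase appears across the profile
counts = Counter(row["Anchor Text"] for row in csv.DictReader(sample))
total = sum(counts.values())

# Flag anchors that dominate the profile (threshold chosen for illustration)
flagged = {a: n / total for a, n in counts.items() if n / total > 0.5}
print(flagged)
```

Swapping `sample` for `open("links.csv")` would run the same check against a real export; anything that surfaces in `flagged` is a candidate for removal or diversification.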
My rankings have gone DOWN more today. A lot. WTF does Google want? Is there something I'm doing wrong? Should we be deleting links from all private networks entirely, or just trying to vary the anchor text? Any advice greatly appreciated. Thanks!
-
I would go through your list and remove the links rather than trying to vary anchor text at this point. I've been hit as well and moved to a domain I have held for years, but am slowly removing bad links that are on networks or painfully outside my niche. I would suggest naturally building links slowly with partial-match anchor text, with the majority of the links having your brand as the anchor text.
-
Hi LilyRay,
Regarding your Penguin penalization, I would treat it like any other pre-Penguin link-based penalty. I have worked with many sites that have been penalized for manipulative linking, and the process to get the penalty lifted is always the same:
- REMOVE as many of the manipulative links as you can. It's the link that Google has classified as manipulative. The anchor text was just the identifier that helped them find it. Changing the anchor text of a manipulative link and leaving it up will keep the penalties associated with those links in place.
- Document all of the steps that you're taking to eliminate manipulative links. Make a neat, bulleted list, with the link(s), network(s), actions taken by you, and the results. In some cases, you won't be able to remove a link. That's understandable, as they're not in your control. While you're at it, clean up ANYTHING else on your site that could be perceived as on-page spam. You're trying to prove to Google that you are a good citizen of the web, so make your site as sparkly as you can.
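That documentation step is easy to keep tidy as a structured log rather than ad-hoc notes. A rough sketch, assuming you track one row per link (the field names and sample entries are illustrative, not anything Google prescribes):

```python
import csv
from io import StringIO

# Illustrative removal log: one row per manipulative link worked on
rows = [
    {"link": "http://network1.example/page", "network": "BlogRing A",
     "action": "emailed webmaster 2012-05-01", "result": "removed"},
    {"link": "http://network2.example/page", "network": "BlogRing B",
     "action": "no contact info found", "result": "unable to remove"},
]

# Write the log as CSV so it can be attached to a reconsideration request
out = StringIO()
writer = csv.DictWriter(out, fieldnames=["link", "network", "action", "result"])
writer.writeheader()
writer.writerows(rows)

# A one-line summary of progress for the request itself
removed = sum(1 for r in rows if r["result"] == "removed")
print(f"{removed} of {len(rows)} links removed")
```

Keeping the "unable to remove" rows is the point: they show good-faith effort on links outside your control.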
Once you've completed these steps, submit all of your documented work as part of your reconsideration request, to show Google that you're operating in good faith. Under normal circumstances, wait times for reconsideration requests can be anywhere from a week to a month. With the mass of reconsiderations that Google is getting right now, I'd expect a longer wait.
I'm sure this process sounds painful, and it is, but it's the only approach I've seen work for recovering from a penalty.
-
It was partially out of my control. Pressure from higher-ups for instantaneous results. I've always supported and wanted to stick to white hat SEO.
-
And promise yourself never to go for the quick and easy again.
-
Google has released a 52-pack of updates since the roll-out of Penguin, as well as Panda 3.6, so you may have been stung again almost immediately after the first hit.
SEOmoz provides an up-to-date change history of algorithm updates as soon as they are released.
Any backlinks you have that are associated with blog rings / networks - I would delete as many as you can. If the network has been identified and blacklisted by Google, then they'll be rolling out penalties for any domains that have used them. Parallel to this, build some natural links to balance out your link profile as soon as you can too.
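If you already know which network domains you're worried about, triaging the backlink list against them is scriptable. A minimal sketch, with hypothetical network domains and backlink URLs (you'd supply your own lists):

```python
from urllib.parse import urlparse

# Hypothetical set of known blog-network domains (examples only)
known_networks = {"blognetwork.example", "linkring.example"}

# Stand-in backlink list, e.g. from an Open Site Explorer export
backlinks = [
    "http://blognetwork.example/widgets-post",
    "http://legit-industry-site.example/review",
    "http://linkring.example/another-post",
]

# Queue for deletion any link whose host is on the network list
to_delete = [u for u in backlinks if urlparse(u).hostname in known_networks]
print(to_delete)
```

A fuller version might also match subdomains of each network domain; this exact-host check is just the simplest starting point.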