Spam pages / content created due to hack. 404 cleanup.
-
A hosting company's server was hacked and one of our customers' sites was injected with 7,000+ pages of bogus promotional content.
The server was patched and the spammy content removed.
Reviewing Google Webmaster Tools, we now have all the hacked pages showing up as 404s and have seen a severe drop in impressions, rankings and traffic. GWT also shows 'Some manual actions apply to specific pages, sections, or links'...
What do you recommend for:
- Cleaning up 404s to spammy pages? (I am not sure redirecting to the home page is the right thing to do - is it?)
- Cleaning up links that were created off-site pointing to the spam pages
- Getting rank back - what would you do in addition to the above?
-
You want those old spam pages to return a 410 code - Gone (permanently). I'm not 100% sure how you will achieve this on your setup, though... I'd speak to your hosting company and/or web developer.
A 404 code means the page is 'not found', which isn't the same as a 410: a 410 tells the search engines that the page has gone forever, so they won't keep looking for it.
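If the site runs on Apache, one common way to do it is a couple of mod_alias rules in the .htaccess file. This is only a rough sketch - the URL patterns below are made-up examples, so you'd substitute whatever footprint the injected pages actually share:

```apache
# Return 410 Gone for anything under the injected spam path.
# "/cheap-meds/" is a hypothetical pattern - replace it with the
# path or footprint the hacked pages actually used.
RedirectMatch gone "^/cheap-meds/.*$"

# Stray one-off URLs can be listed individually:
Redirect gone "/fake-promo-page.html"
```

If the spam URLs share no usable pattern, the same thing works from a plain list - one 'Redirect gone' line per URL. Either way, a 410 should drop out of the index faster than a 404 that Google keeps retrying.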
Hope this helps!
Amelia
Related Questions
-
Keyword research, creating copy, fixing on-page optimisation - what next?
Hello - wondered if I could get people's thoughts. We/I have started working on a client's website to improve everything - a general overhaul across SEO, on-page optimisation etc. I'm relatively new to this, although I'm picking things up and learning on the job, which is great, and Moz is so helpful! So far we have conducted a review of the website, created a large list of keywords and analysed these, started overhauling the copy and adding the new keywords within it, and have plans to overhaul the other elements of the site (headings, tags etc) and improve the design, functionality and customer journey through the website. My question is: where do I go from here in terms of keywords and SEO? Is it a case of plugging in the keywords we've researched, watching how they perform, and then switching things up with different keywords if they aren't performing as well as we expected? Is it really a lot of trial and error, or is there an exact science behind it that I'm missing? I just feel a little as though we've pulled these keywords out of thin air to a degree, and are adding them into our copy because the numbers on Moz show they should perform well and they are what we are trying to promote on the website. But I don't know if this is right?! Perhaps I'm overthinking it...
Technical SEO | WhitewallGlasgow
-
HTML snapshot creating soft 404
Does anyone have any experience with HTML snapshots? We have a recruitment client whose job pages all have HTML snapshots, as the pages are built with AJAX. The pages naturally die after around four weeks (the job vacancy runs out), and while the AJAX version of the page hard-404s, the HTML snapshot version returns a soft 404. How can we get it to mirror the dead page with a 404 status?
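The clean fix is for the snapshot endpoint itself to start returning the 404 (or 410) status once the vacancy expires, instead of a 200 with 'not found' content - that needs a change in the application code. A blunter stopgap, assuming the site runs on Apache and you can list the dead vacancy URLs (both assumptions), is to force the status in .htaccess:

```apache
# Force a real 404/410 on expired vacancies' snapshot URLs so the
# snapshot can no longer answer with a soft 200.
# The job paths below are hypothetical examples.
Redirect 404 "/jobs/1234-senior-recruiter"
Redirect gone "/jobs/5678-account-manager"
```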
Technical SEO | AndrewAkesson
-
Creating a Landing Page with a Separate Domain to Control Bounce Rate
I work with a unique situation where we have a site that gets tons of free traffic from internal free resources. We do make revenue from this traffic, but due to its nature, it has a high bounce rate. Data shows that once someone from this source clicks through to a second page they are engaged - so visitors either bounce or go on to view multiple pages. After testing various landing pages, I've determined that the best solution would be to create a landing page on a separate domain and hide it from the search engines (to prevent duplicate content and the appearance of link farming). The theory is that once visitors click through to the site, they will bounce at a lower rate and improve the stats of the website. The landing page would essentially filter out this bad traffic. My question is: how sound is this theory? Will this cause any issues with Google or any other search engines?
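On the 'hide it from the search engines' part: one way to keep a landing page out of the index without touching its markup is an X-Robots-Tag header. A minimal sketch, assuming Apache with mod_headers and a hypothetical filename:

```apache
# Keep the separate-domain landing page out of the index.
# Requires mod_headers; "landing.html" is a made-up filename.
<Files "landing.html">
    Header set X-Robots-Tag "noindex, nofollow"
</Files>
```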
Technical SEO | jhacker
-
Avoiding duplicate content on product pages?
Hi, I'm creating a bunch of product pages for courses for a university and I'm concerned about duplicate content penalties. While the page names are different and some of the text is different, much of the text is the same between pairs of pages, i.e. a BA and an MA in a particular subject (say 'hairdressing') will have the same subject descriptions, school introduction paragraph, industry overview paragraph etc. 1. Is this a problem? In a site with 100 pages, if sets of 2 pages have about 50% identical content... 2. If it is a problem, is there anything I can do, other than rewrite the text? 3. From a search perspective, would both pages show up in search results in searches related to 'hairdressing courses', 'study hairdressing' etc? Thanks!
Technical SEO | AISFM
-
Search/Search Results Page & Duplicate Content
If you have a page whose only purpose is to allow searches, and the search results can be generated by any keyword entered, should all those search-result URLs be noindexed or given a rel=canonical? Thanks.
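To illustrate the noindex route: if the results live on URLs like /search?q=..., one pattern-based rule can cover every possible keyword. A rough sketch assuming Apache 2.4 with mod_headers and a hypothetical 'q' parameter:

```apache
# Send noindex on any URL whose query string carries the search
# parameter. "q=" is a hypothetical parameter name.
<If "%{QUERY_STRING} =~ /(^|&)q=/">
    Header set X-Robots-Tag "noindex, follow"
</If>
```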
Technical SEO | cakelady
-
I am trying to correct an error report of duplicate page content. However, across over 100 blogs I am unable to find the page whose content is similar to the page SEOmoz reported. Is my only option just to delete the blog page?
I am trying to correct duplicate content. However, SEOmoz only reports and shows the page of duplicate content. I have 5 years' worth of blogs and cannot find the duplicate page. Is my only option to just delete the page to improve my rankings? Brooke
Technical SEO | wianno168
-
Once duplicate content found, worth changing page or forget it?
Hi, the SEOmoz crawler has found over 1,000 duplicate pages on my site. The majority are based on location, and unfortunately I didn't have time to add much location-based info. My question is: if Google has already discovered these, determined they are duplicates, and chosen the main ones to show in the SERPs, is it worth me updating all of them with localized information so Google accepts the changes and maybe considers them different pages? Or do you think they'll always be considered duplicates now?
Technical SEO | SpecialCase
-
Mitigating duplicate page content on dynamic sites such as social networks and blogs.
Hello, I recently did an SEOmoz crawl for a client site. As is typical, the most common errors were duplicate page titles and duplicate content. The client site is a custom social network for researchers. Most of the pages showing as duplicates are simple variations of each user's profile, such as comment sections, friends pages, and events. So my question is how we can limit duplicate content errors for a complex site like this. I already know about the rel=canonical tag and the rel=next tag, but I'm not sure if either of these will do the job. Also, I don't want to lose potential links/link juice for good pages. Are there ways of applying the 'noindex' tag in batches? For instance: noindex all URLs containing a given character? Or do most CMSs allow this to be done systematically? Anyone with experience doing SEO for a custom social network or forum, please advise. Thanks!!!
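On the 'noindex in batches' question: yes, this can be done by URL pattern at the server level rather than template by template. A rough sketch assuming Apache 2.4 with mod_headers and made-up profile sub-page paths:

```apache
# Batch-noindex the thin profile variations (comments, friends,
# events) by URL pattern. The /members/ path is hypothetical.
<If "%{REQUEST_URI} =~ m#^/members/[^/]+/(comments|friends|events)#">
    Header set X-Robots-Tag "noindex, follow"
</If>
```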
Technical SEO | BPIAnalytics