Spam malware causing many indexed pages
-
Hey Mozzers,
I was speaking with a friend today about a site he has been working on that was already infected when he began working on it. Here (https://www.google.ca/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=site:themeliorist.ca) you can see that the site has 4,400 indexed pages, but if you scroll down you will see pages such as /pfizer-viagra-samples/ or /dapoxetine-kentucky/. All of these pages are returning 404 errors, and I ran the site through the SEO spider just to see whether any of these pages would show up, and they don't.
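In case it's useful, this is roughly how the 404s can be confirmed in bulk for URLs pulled from the site: query (just a quick sketch that assumes the Python `requests` library; the two paths are examples from the index, not the full list):

```python
import requests

# Two of the spam paths still showing under site:themeliorist.ca
# (an illustrative sample, not the full list of indexed URLs).
spam_urls = [
    "http://themeliorist.ca/pfizer-viagra-samples/",
    "http://themeliorist.ca/dapoxetine-kentucky/",
]

for url in spam_urls:
    # allow_redirects so a redirect chain ending in a 404 is still caught
    response = requests.get(url, allow_redirects=True, timeout=10)
    print(f"{url} -> {response.status_code}")
```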
This is not an issue for a client; I am just curious why these pages are still hanging around in the index. Maybe others have experienced this issue too.
Cheers,
-
Hey
You can use the URL removal tool to expedite this, and it is one of the few times that Google actually recommends doing so: https://support.google.com/webmasters/answer/1269119?hl=en. Otherwise, the URLs will eventually fall away on their own, but it can take some time.
The most important thing here is to ensure the site is 100% protected going forward and does not get reinfected. The pharma hack often has backdoors in three places: WP itself, a plugin, and often the database. These can go for months without being called, and then suddenly the site is reinfected again. The attackers are getting better all the time at making this harder and harder to clean up.
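As a starting point only (a proper clean-up should follow something like the Sucuri write-up linked below), a rough signature scan over the WordPress install looks like this; the file patterns are common examples of what this class of hack hides behind, not an exhaustive or definitive list, and legitimate plugins will trigger false positives:

```python
import os
import re

# Obfuscation patterns the pharma hack commonly hides behind (illustrative).
SUSPICIOUS_PATTERNS = [
    re.compile(rb"base64_decode\s*\("),
    re.compile(rb"eval\s*\(\s*gzinflate"),
    re.compile(rb"eval\s*\(\s*base64_decode"),
]

def scan_wordpress(root="."):
    """Walk a WordPress install and flag PHP files with suspicious code.

    Expect false positives: many legitimate plugins use base64_decode,
    so treat the output as a list for manual review, not a verdict.
    """
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as handle:
                contents = handle.read()
            for pattern in SUSPICIOUS_PATTERNS:
                if pattern.search(contents):
                    print(f"Suspicious: {path} ({pattern.pattern!r})")
                    break

if __name__ == "__main__":
    # Point this at wp-content (themes + plugins) and wp-includes.
    scan_wordpress("wp-content")
```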
This is worth a read:
http://blog.sucuri.net/2010/07/understanding-and-cleaning-the-pharma-hack-on-wordpress.html
We often also see a second-degree payload with some black hat SEO and outbound links on the site, so even when you get rid of the main problem, you may have a few small residual problems. I would suggest an SEO audit, some proactive security, and at the very least a review of the outbound links from the site to make sure they are all legit (Screaming Frog is your friend here; it will show external links plus the linking page).
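If Screaming Frog isn't to hand, a throwaway script can produce a similar outbound-link report for a handful of pages; this is only a sketch, assumes the Python `requests` and `beautifulsoup4` packages, and the page list is illustrative:

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

# Pages to check for outbound links (an illustrative sample).
pages = [
    "http://themeliorist.ca/",
]

for page in pages:
    html = requests.get(page, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    site_host = urlparse(page).netloc

    for anchor in soup.find_all("a", href=True):
        href = anchor["href"]
        host = urlparse(href).netloc
        # Report only links that point off the site.
        if host and host != site_host:
            print(f"{page} -> {href} ({anchor.get_text(strip=True)!r})")
```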
Hope that helps!
Marcus -
Sometimes search engines do not update as frequently as we would like them to. Since you've already verified that these pages no longer exist on your site, I would also suggest that you actively try to have them removed using your Google Webmaster Tools account.
Google source: https://support.google.com/webmasters/answer/1663419?hl=en
Related Questions
-
Duplicate H1 Question & Landing Page help
Hi. We have two H1s on this page: http://www.key.co.uk/en/key/heavy-duty-shelving. Our webmaster has set one to display:none, but isn't this just going to look like we're keyword spamming and trying to hide it? OK, now that I'm looking, I am seeing more wrong with this page: the width buttons at the top are H2s, and they link to facet pages. Won't this just waste crawl budget? And every product title, user guide title, etc. is an H2. I just need to put a plan together to give to our dev team on what should be updated (see the rough sketch below). Any tips would be great. Becky
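A rough way to get that concrete list is a small script along these lines (only a sketch; it assumes the Python `requests` and `beautifulsoup4` packages and catches only headings hidden with an inline style, not ones hidden via a CSS class):

```python
import requests
from bs4 import BeautifulSoup

URL = "http://www.key.co.uk/en/key/heavy-duty-shelving"

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# List every heading with its level and flag any hidden via inline CSS.
for heading in soup.find_all(["h1", "h2", "h3"]):
    style = heading.get("style", "")
    hidden = "display:none" in style.replace(" ", "")
    flag = "  <-- hidden inline" if hidden else ""
    print(f"{heading.name}: {heading.get_text(strip=True)[:60]}{flag}")
```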
Intermediate & Advanced SEO | | BeckyKey0 -
Fetch as Google -- Does not result in pages getting indexed
I run an exotic pet website which currently has several species of reptiles. It has done well in the SERPs for the first couple of types of reptiles, but I am continuing to add new species, and with each of these comes the task of getting ranked, so I need to figure out the best process. We just released our 4th species, "reticulated pythons", about 2 weeks ago. I made these pages public and, in Webmaster Tools, did a "Fetch as Google" and submitted the page and its child pages for indexing: http://www.morphmarket.com/c/reptiles/pythons/reticulated-pythons/index While Google immediately indexed the index page, it did not really index the couple of dozen pages linked from this page, despite me checking the option to crawl child pages. I know this in two ways: first, in Google Webmaster Tools, if I look at Search Analytics and Pages filtered by "retic", there are only 2 listed. This at least tells me it's not showing these pages to users. More directly, though, if I look at a Google search for "site:morphmarket.com/c/reptiles/pythons/reticulated-pythons" there are only 7 pages indexed. More details: I've tested at least one of these URLs with the robots checker and they are not blocked. The canonical values look right. I have not really monkeyed with Crawl URL Parameters. I do NOT have these pages listed in my sitemap, but in my experience Google didn't care a lot about that; I previously had about 100 pages there and Google didn't index some of them for more than a year. Google has indexed "105k" pages from my site, so it is very happy to index, apparently just not the pages I want (this large value is due to permutations of search parameters, something I think I've since improved with canonical, robots, etc.). I may have some nofollow links to the same URLs but NOT on this page, so assuming nofollow has only local effects, this shouldn't matter. Any advice on what could be going wrong here? I really want Google to index the top couple of links on this page (home, index, stores, calculator) as well as the couple dozen gene/tag links below.
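One way to spot-check all of the child pages in one pass (status code, meta robots, and canonical for each) is a quick script along these lines; this is only a sketch, assumes the Python `requests` and `beautifulsoup4` packages, and the URL list would need to be filled in with the actual gene/tag pages:

```python
import requests
from bs4 import BeautifulSoup

# The index page plus the child pages to audit (fill in the real list).
urls = [
    "http://www.morphmarket.com/c/reptiles/pythons/reticulated-pythons/index",
    # ...the couple of dozen gene/tag pages go here
]

for url in urls:
    response = requests.get(url, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")

    robots_tag = soup.find("meta", attrs={"name": "robots"})
    canonical = soup.find("link", rel="canonical")

    print(url)
    print("  status:   ", response.status_code)
    print("  robots:   ", robots_tag.get("content") if robots_tag else "(none)")
    print("  canonical:", canonical.get("href") if canonical else "(none)")
```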
Intermediate & Advanced SEO | | jplehmann0 -
We are switching our CMS local pages from a subdomain approach to a subfolder approach. What's the best way to handle this? Should we redirect every local subdomain page to its new subfolder page?
We are looking to create a new subfolder approach within our website versus our current subdomain approach. How should we go about handling this properly so as not to lose everything we've worked on up to this point using the subdomain approach? Do we need to redirect every subdomain URL to the new subfolder page? Our current local pages subdomain set-up: stores.websitename.com. How we plan on structuring our new local subfolder set-up: websitename.com/stores/state/city/storelocation. Any and all help is appreciated.
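To make the redirect question concrete, here is a minimal sketch of a one-to-one 301 map from old subdomain URLs to the new subfolder URLs; the store, state, and city values are made up purely for illustration:

```python
# Old subdomain URL -> (state, city, store slug); the data here is made up.
stores = {
    "https://stores.websitename.com/store-123": ("texas", "austin", "store-123"),
    "https://stores.websitename.com/store-456": ("ohio", "columbus", "store-456"),
}

def new_url(state, city, slug):
    """Build the new subfolder-style URL for a store page."""
    return f"https://websitename.com/stores/{state}/{city}/{slug}"

# One permanent (301) redirect per old store URL.
for old, (state, city, slug) in stores.items():
    print(f"301  {old}  ->  {new_url(state, city, slug)}")
```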
Intermediate & Advanced SEO | | SEO.CIC0 -
Rel=next/prev for paginated pages: then no need for "noindex, follow"?
I have a real estate website and use rel=next/prev for paginated real estate result pages. I understand "noindex, follow" is not needed for the paginated pages. However, my case is a bit unique: this is a real estate site where the listings also show on competitors' sites. So I thought that if I "noindex, follow" the paginated pages, that would reduce the amount of duplicate content on my site and ultimately support my site ranking well. Again, I understand "noindex, follow" is not needed for paginated pages when using rel=next/prev, but since my content will probably be considered fairly duplicate, I question whether I should do it anyway.
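For reference, a rough sketch of what each paginated results page would carry in its head under this approach (the URL pattern and page numbers are illustrative, and the optional noindex flag is exactly the judgment call described above):

```python
def pagination_head_tags(base_url, page, last_page, noindex_deep_pages=False):
    """Return the head tags for one page of a paginated listing.

    base_url            e.g. "https://example-realestate.com/listings" (made up)
    page                current page number, 1-based
    last_page           total number of result pages
    noindex_deep_pages  optionally add "noindex, follow" on pages 2 and beyond
    """
    tags = []
    if page > 1:
        tags.append(f'<link rel="prev" href="{base_url}?page={page - 1}">')
    if page < last_page:
        tags.append(f'<link rel="next" href="{base_url}?page={page + 1}">')
    if noindex_deep_pages and page > 1:
        tags.append('<meta name="robots" content="noindex, follow">')
    return "\n".join(tags)

# Example: the middle page of a ten-page result set.
print(pagination_head_tags("https://example-realestate.com/listings", 5, 10))
```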
Intermediate & Advanced SEO | | khi50 -
Index Pages become No-Index
Hi Mozzers, Here is the scenario: I created a landing page targeting holiday keywords for the holiday season. The page has been crawled and indexed, and I see my landing page in the SERPs. However, because of the CMS layout, now that the holiday is over and I don't want the page displayed on the homepage, I have to remove it from the homepage, which makes it no-index (don't ask why; it's how the CMS was built). Question: How does this affect this landing page's search presence? Since it's already crawled and indexed, will it still be in the SERPs after I change the page to no-index? If I remove the no-index next year for the holiday season, how does this all play out? Any insights or information provided will be appreciated. Thank you!
Intermediate & Advanced SEO | | TommyTan0 -
Is it possible to get a list of pages indexed in Google?
Is there a tool that will give me a list of pages on my site that are indexed in Google?
Intermediate & Advanced SEO | | rise10 -
Disallowed Pages Still Showing Up in Google Index. What do we do?
We recently disallowed a wide variety of pages for www.udemy.com which we do not want Google indexing (e.g., /tags or /lectures). Basically we don't want to spread our link juice around to all these pages that are never going to rank. We've added them as disallows in robots.txt, but after 2-3 weeks Google is still showing them in its index. When we look up "site:udemy.com", for example, Google currently shows ~650,000 pages indexed, when really it should only be showing ~5,000 pages indexed. As another example, if you search for "site:udemy.com/tag", Google shows 129,000 results. We've definitely added "/tag" into our robots.txt properly, so this should not be happening; Google should be showing 0 results. Any ideas re: how we get Google to pay attention and re-index our site properly?
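As a sanity check on the robots.txt side, a short standard-library script along these lines confirms whether sample URLs are actually matched by the live disallow rules (the sample paths below are illustrative, not real udemy.com URLs):

```python
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://www.udemy.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# A few URLs under the disallowed sections (paths are illustrative).
sample_urls = [
    "https://www.udemy.com/tag/python/",
    "https://www.udemy.com/lectures/12345/",
]

for url in sample_urls:
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url} -> {'allowed' if allowed else 'blocked'} for Googlebot")
```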
Intermediate & Advanced SEO | | udemy0 -
Not sure why Home page is outranked by less optimized internal pages.
We launched our website just three weeks ago, and one of our primary keyword phrases is "e-business consultants". Here's what I don't get. Our home page is the page most optimized around this search phrase. Using the SEOmoz On-Page Optimization tool, the home page scores an "A". And yet it doesn't rank in the top 50 on Google Canada, although two other INTERNAL pages, www.ebusinessconsultants.ca/about/consulting-team/ and www.ebusinessconsultants.ca/about/consulting-approach/, rank 5 and 6 on Google Canada, even though they only score a grade "C" for on-page optimization for this keyword phrase. I've always understood that the home page is the most powerful page. Why are these others outranking it? I checked the crawl and Google Webmaster Tools, and there is no obvious problem on the home page. Is this because the site is so new? It goes against all previous experience I've had in similar situations. Any guidance/insight would be highly appreciated!!
Intermediate & Advanced SEO | | axelk0