How to fix the Google index after cleaning a site infected with malware.
-
Hi All
I upgraded a Joomla site for a customer a couple of months ago that was infected with malware (it wasn't flagged as infected by Google). The site is fine now, but I'm still noticing search queries for "cheap adobe" etc. with links to http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi in Webmaster Tools (about 50 in total). These URLs redirect back to the home page and seem to be remaining in the index (I think Joomla is doing this automatically).
Firstly, what sort of effect would these be having on their rankings? Would Google see them as duplicate content for the home page? (Moz doesn't report them as such, as there are no internal links.)
Secondly, what's my best plan of attack to fix them? Should I set up 404s for them and then submit them to Google? Will resubmitting the site to the index fix things?
Would appreciate any advice or suggestions on the ramifications of this and how I should fix it.
Regards, Ian
-
Thanks Tom
That's a good point. Part of my problem lies in the number of URLs with parameters (thousands), so applying status codes of any kind isn't really viable.
I'm starting to see the URLs clean up since adding the entries to robots.txt.
Regards
Ian
-
I would make them return a 410, not a 404.
A 410 means the page is dead and gone; if you use a 404, Google will keep coming back to see if you've fixed it.
Sending Google a 410 lets them know it's gone for good.
http://moz.com/learn/seo/http-status-codes
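On an Apache server, a couple of mod_rewrite lines in .htaccess will do it. This is just a minimal sketch, assuming the spam URLs all carry the vc query variable from the example above (a Zeus server like yours would need the equivalent in its own rewrite syntax):
RewriteEngine On
# Return 410 Gone for any index.php request whose query string starts with vc=
RewriteCond %{QUERY_STRING} ^vc=
RewriteRule ^index\.php$ - [G,L]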
all the best,
tom
-
OK, I might have a solution that will at least work for my situation.
Since implementing SEF URLs on the site I have no real need for any URLs with parameters. Adding the following to robots.txt should prevent any indexing of old pages or pages with parameters (the User-agent line is needed for a valid rule block, and since robots.txt rules match by prefix no trailing wildcard is required):
User-agent: *
Disallow: /index.php?
Tested it in Webmaster Tools with some of the offending URLs and it seems to work. I'll wait until the next indexing and post back or mark it as answered.
-
Thanks all for your help.
A little more information, and maybe a little more advice required.
Since fixing the malware, http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi and similar are no longer actual pages. Joomla sees anything after the ? as a parameter and, because it no longer matches a page, just ignores it and defaults to the home page, http://domain.com/index.php. This is the default behavior of Joomla and probably of most other content management systems. The problem lies in the fact that Google indexed that page while the site was infected, and it remains in the index because Google sees a 200 status code whenever it re-crawls the page.
The problem is now broader and has more ramifications than I first thought. Any page from the previous system that used parameters will return a 200 status code and remain in the index. Checking URL parameters in Webmaster Tools confirms this, with various parameters showing thousands of monitored URLs. Keep in mind that Google is showing a message saying there are no problems with parameters for this site.
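A quick way to confirm that these URLs really do return a 200 is to request one and look at the response headers, for example with curl (using the example URL from my first post):
curl -I "http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi"
If Joomla is defaulting these to the home page, the first line of the response will be HTTP/1.1 200 OK rather than a 404, which is exactly what Google sees when it re-crawls.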
So the advice I need now relates to URL parameters in Webmaster Tools. The new site uses SEF URLs and so makes much less use of parameters. How can I ensure that the old, redundant pages with parameters are dropped from the index? Handling them individually would involve thousands of 301s or 404s, let alone trying to work them all out. There is a reset link for each parameter in Webmaster Tools, but not much documentation as to what it does. If I reset all the parameters, would that clean up the index?
I'd be interested in what others think about this issue, because I feel it might be a common problem with CMS-based platforms: after major changes, thousands of parameter-based URLs just defaulting to the home page and other pages probably affects site and page rankings.
Ian
-
The search engines are retaining the links in their indexes because following them through the redirect returns a 200 server header, which tells them all is well and there is a page there to index. As you note in your other responses, the only way to change that is to force the server to return a 404 header as a signal to the search engines to eventually drop the pages.
Yes, you could use a robots.txt directive to block the specific URLs that are the target of the spam links, in order to satisfy the URL Removal Tool's requirement for allowing a removal request. That should work as a quicker solution than trying to make coding changes in Joomla (sorry, it's been about 3.5 years since I've done any Joomla work).
Good luck!
Paul
[EDIT: Gah... ignore the P.S. below, as I didn't notice you don't have an easy way to get redirects into the Zeus server before Joomla kicks in. Sorry!]
P.S. A final quick option would be to write a rule in .htaccess to 301-redirect the fake URLs to a real 404 page. This would kick in before Joomla got a chance to interfere with its pseudo-redirect.
-
You're right; I guess I was focused on the index. Moz isn't showing any external links to these pages, and neither is Webmaster Tools. My feeling is that Google is retaining them for some reason, maybe just for the keywords in the URL?
-
I've checked the source of the visits, and they are only coming from Google searches for "cheap adobe" and the like. The original malware used the site to get these searches into the index and then directed them on to other sites/pages.
Being a Zeus server it doesn't use .htaccess; my task would be a lot simpler if it did. It has an alternative rewrite file, but documentation on using it for 404s is scarce.
I'll keep researching.
-
That means nobody clicks on them, but how did Google find them? This is not evidence that there are no links, just that no one has visited your site through them.
-
Thanks Paul
I've checked analytics, and the only source of these URLs is Google organic search, not external sites. Unfortunately, I think my problem is the dynamic nature of Joomla and a combination of factors that are causing it to behave in an SEO-unfriendly way.
I think my biggest challenge is getting the URLs to 404 before I submit them to the Webmaster Tools removal tool (which my research tells me needs to be done before you submit). I think I read there might be a robots.txt option, so I'll look into that.
Ian
-
These pages may have links from other spam sites; you don't want them returning a 200.
You want them to 404. In Joomla you can choose whether or not the site uses .htaccess; make sure it does, and 404 the pages there.
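For instance, once Joomla is using .htaccess (renaming the htaccess.txt file that ships with Joomla is the usual first step), a minimal sketch along these lines would 404 any index.php request that still carries a query string; adjust the condition if the site relies on any legitimate parameters:
RewriteEngine On
# Return 404 for any index.php request with a non-empty query string
RewriteCond %{QUERY_STRING} .
RewriteRule ^index\.php$ - [R=404,L]
-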
Thanks Alan
This seems to be caused by the combination of Joomla/Zeus and the redirection manager. The site is no longer infected, the only visits are from Google organic searches, and it's been a couple of months. For whatever reason, Joomla feels it shouldn't 404 these pages and just displays the home page (it doesn't 301-redirect them).
My feeling is that these URLs in the index, and the visits from them, probably aren't doing the site any good.
-
Thanks Dave
I think this might be a good option, but I have a couple of problems trying to achieve it. It's a Joomla CMS running on a Zeus server with a search-engine-friendly (SEF) URL plugin, which is possibly the worst combination of technologies for SEO in history. The combination of URL rewrites in Zeus and Joomla's redirection manager just displays the home page under the dodgy URL and gives it a 200 status code. I think this is why Google is taking so long to drop it from the index.
Ian
-
You absolutely do NOT want to redirect these links to the home page, Ian! These are spam links, coming from completely unrelated sites. They are Google's very definition of unnatural links, and 301-redirecting them to your home page redirects their potential damage to your home page as well.
You want them to return a 404 status as quickly as possible. I'd also be tempted to use the Webmaster Tools removal tool to try to speed up the process, especially if these junk links currently form a large percentage of your overall link profile. (You'll need to find and remove the redirect that currently re-points them to the home page too, for the 404 header to do its job of telling the search engines to drop the pages from their indexes.)
As far as ranking issues go, this isn't a potential duplicate-content issue; it's a damaging unnatural-links issue, which is even more significant. These are the kinds of links that could lead to at least an algorithmic penalty or, worst case, a manual penalty. Either way, those penalties are vastly harder to fix after the fact than to avoid in the first place.
In addition to the steps above, designed to make it clear those links don't belong to your site, I'd keep a good record of the links, their originating domains, and when and how they were originally created by the malware attack and your fix. That way you'll have essential documentation should you receive a penalty and need to submit a reinclusion request.
Hope that answers your questions?
Paul
-
Why are they redirecting back to the home page? Are you redirecting them, or are you still infected?
I would make sure they 404.
-
The easiest way would be a permanent redirect on the offending URLs.
Check the incoming variable (i.e. vc) and permanently redirect with a 301 if it's an offending one. When Google sees the 301 it will drop the URL from the index.
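A minimal .htaccess sketch of this suggestion, assuming Apache and the vc variable from the example URL (the target page is just a placeholder, and note that other replies in this thread recommend a 404 or 410 rather than a 301 for spam URLs):
RewriteEngine On
# 301 any index.php request whose query string starts with vc=
RewriteCond %{QUERY_STRING} ^vc=
RewriteRule ^index\.php$ /replacement-page/ [R=301,L]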
There is also a URL removal tool in Google Webmaster Tools if the URL contains any personal information.
I had a similar issue a few days ago, caused by a corrupt XML sitemap, and the index is already starting to clear up.