Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
How to fix the Google index after fixing a site infected with malware
-
Hi All
I upgraded a Joomla site for a customer a couple of months ago that was infected with malware (it wasn't flagged as infected by Google). The site is fine now, but I'm still noticing search queries for "cheap adobe" etc. with links to http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi in Webmaster Tools (about 50 in total). These URLs redirect back to the home page and seem to be remaining in the index (I think Joomla is doing this automatically).
Firstly, what sort of effect would these be having on their rankings? Would they be seen by Google as duplicate content for the home page? (Moz doesn't report them as such, as there are no internal links.)
Secondly, what's my best plan of attack to fix them? Should I set up 404s for them and then submit them to Google? Will resubmitting the site to the index fix things?
Would appreciate any advice or suggestions on the ramifications of this and how I should fix it.
Regards, Ian
-
Thanks Tom
That's a good point. Part of my problem lies in the number of URLs with parameters (thousands), so applying status codes of any type isn't really viable.
I'm starting to see the URLs clean up with the addition of the entries in robots.txt.
Regards
Ian
-
I would make them return a 410, not a 404.
410s are for dead links; if you use a 404, Google will keep coming back to see if you've fixed it.
Sending Google a 410 lets them know the page is gone for good.
http://moz.com/learn/seo/http-status-codes
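For anyone on Apache with mod_rewrite, a rule like the sketch below would return the 410 for every old index.php URL that still carries a query string. This is only an illustration (the site in this thread turns out to be on a Zeus server, which doesn't read .htaccess), and it assumes none of the live URLs still need query parameters:
# Return 410 Gone for any index.php request that still has a query string
RewriteEngine On
RewriteCond %{QUERY_STRING} .
RewriteRule ^index\.php$ - [G]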
All the best,
Tom
-
OK, I might have a solution that would at least work for my situation.
Since implementing SEF URLs on the site I have no real need for any URLs with parameters. Adding the following to robots.txt (with a User-agent line so it applies to all crawlers) should prevent any further indexing of old pages or pages with parameters:
User-agent: *
Disallow: /index.php?*
I tested it in Webmaster Tools with some of the offending URLs and it seems to work. I'll wait until the next indexing and post back or mark it as answered.
-
Thanks all for your help.
Here's a little more information, and maybe I need a little more advice.
Since fixing the malware, http://domain.com/index.php?vc=201&Cheap_Adobe_Acrobat_xi and similar URLs are no longer actual pages. Joomla treats anything after the ? as a parameter and simply ignores it because it no longer matches a page, which is why it just defaults to the home page, http://domain.com/index.php. This is the default behaviour of Joomla and probably most other content management systems. The problem is that Google indexed that page while the site was infected, and it remains in the index because Google sees a status code of 200 when it re-crawls it.
The problem is now a bit broader and has more ramifications than I first thought. Any pages from the previous system that used parameters would receive a 200 status code and remain in the index. Checking URL parameters in Webmaster Tools confirms this, with various parameters showing thousands of URLs monitored. Keep in mind that Google is showing a message that there are no problems with parameters for this site.
So the advice I need now relates to URL parameters in Webmaster Tools. The new site uses SEF URLs and so makes much less use of parameters. How can I ensure that the old redundant pages with parameters are dropped from the index? Doing that individually would mean thousands of 301s or 404s, let alone trying to work them all out. There is a reset link for each parameter in Webmaster Tools, but not much documentation on what it does. If I reset all the parameters, would that clean up the index?
I'd be interested in what others think about this issue, because I feel it might be a common problem with CMS-based platforms: after major changes, thousands of parameter-based URLs just defaulting to the home page and other pages probably affects site and page rankings.
Ian
-
The search engines are retaining the links in their indexes because following them through the redirect returns a 200 server header, which to the search engines means all is well and there is a page there to index. As you note in your other responses, the only way to change that is to force the server to return a 404 header as a signal to the search engines to eventually drop the page.
Yes, you could use a robots.txt directive to block those specific URLs that are the target of the spam links, in order to satisfy the URL Removal Tool's requirement for allowing a removal request. That should work as a quicker solution than trying to make coding changes in Joomla (sorry, it's been about 3.5 yrs since I've done any Joomla work).
Good luck!
Paul
[EDIT: Gah...ignore the P.S. as I didn't notice you don't have an easy way to get redirects into the Zeus server before Joomla kicks in. Sorry]
P.S. A final quick option would be to write a redirect in htaccess to 301-redirect the fake URLs to a real 404 page. This would kick in before Joomla got a chance to interfere with its pseudo-redirect.
-
You're right, I guess I was focused on the index. Moz isn't showing any external links to these pages and neither is Webmaster Tools. My feeling is that Google is retaining them for some reason, maybe just the keywords in the URL?
-
I've checked the source of the visits and they are only coming from Google searches for "cheap adobe" and the like. The original malware used the site to get these searches into the index and then direct them to other sites/pages.
Being a Zeus server, it doesn't use htaccess; my task would be a lot simpler if it did. It has an alternative rewrite file, but documentation on using it for 404s is scarce.
I'll keep researching.
-
That means nobody clicks on them, but how did Google find them? This isn't evidence that there are no links, just that no one has visited your site through them.
-
Thanks Paul
I've checked analytics and the only source of these URLs is Google organic searches, not external sites. I think, unfortunately, my problem is the dynamic nature of Joomla and a combination of factors that are causing it to behave in an SEO-unfriendly way.
I think my biggest challenge is getting the URLs to 404 before I submit them to the Webmaster Tools removal tool (which my research tells me needs to be done before you submit). I think I read there might be a robots.txt option, so I'll look into that.
Ian
-
These pages may have links from other spam sites, so you don't want them to return a 200.
You want them to 404. In Joomla you can choose whether or not the site uses htaccess; make sure it does, and 404 the pages there.
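As a rough illustration (assuming Apache and mod_rewrite, which this particular site on Zeus doesn't have, so its alternative rewrite file would need the equivalent rule), the htaccess rule could look something like this:
# Force a 404 for the old spam URLs; pattern based on the example URL in the question, adjust to suit
RewriteEngine On
RewriteCond %{QUERY_STRING} Cheap_ [NC]
RewriteRule ^index\.php$ - [R=404,L]
-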
Thanks Alan
This seems to be caused by the combination of Joomla/Zeus and the redirection manager. The site is no longer infected, the only visits are from organic Google searches, and it's been a couple of months. Whatever the reason, Joomla feels it shouldn't 404 these pages and just displays the home page for them (it doesn't 301 redirect them).
My feeling is that these URLs in the index, and the visits from them, probably aren't doing the site any good.
-
Thanks Dave
I think this might be a good option, but I have a couple of problems with trying to achieve it. It's a Joomla CMS running on a Zeus server with a Search Engine Friendly URL plugin. I think that is possibly the worst combination of technologies for SEO in history. The combination of the URL rewrites in Zeus and the redirection manager just displays the home page at the dodgy URL and gives it a 200 status code. I think this is why Google is taking so long to drop it from the index.
Ian
-
You absolutely do NOT want to redirect these links to the home page, Ian! These are spam links, coming from completely unrelated sites. They are Google's very definition of unnatural links and 301-redirecting them to your home page also redirects their potential damage to your home page.
You want them to return a 404 status as quickly as possible. I'd also be tempted to use the Webmaster Tools removal tool to try to speed up the process, especially if these junk links currently form a large percentage of your overall link profile. (You'll need to find and remove the redirect that currently re-points them to the home page too, for the 404 header to do its job of telling the search engines to drop the pages from their indexes.)
As far as ranking issues go, this isn't a potential duplicate content issue; it's a damaging unnatural links issue, which is even more significant. These are the kinds of links that could lead to at least an algorithmic penalty or, worst case, a manual penalty. Either way, these penalties are vastly harder to fix after the fact than to avoid in the first place.
In addition to the steps above, designed to make it clear those links don't belong to your site, I'd keep a good record of the links, their originating domains, and when and how they were originally created by the malware attack, along with your fix. That way you have essential documentation should you receive a penalty and need to submit a reinclusion request.
Hope that answers your questions?
Paul
-
Why are they redirecting back to the home page? Do you redirect them, or are you still infected?
I would make sure they 404.
-
The easiest way would be a permanent redirect on the offending URLs.
Check the incoming variable (i.e. vc) and permanently redirect with a 301 if it's an offending one. When Google sees the 301, it will drop the URL from the index.
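On Apache, that check could be done with a mod_rewrite condition on the query string. The sketch below is only illustrative: the redirect target is a made-up placeholder, and another reply in this thread recommends returning a 404/410 for these spam URLs rather than 301-redirecting them.
# Sketch: 301 any index.php request whose query string carries the offending vc parameter
# (in practice, narrow the pattern to the known offending values; /removed is a placeholder
#  target, and the trailing ? drops the old query string from the redirect)
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)vc= [NC]
RewriteRule ^index\.php$ /removed? [R=301,L]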
There is a URL removal tool in Google Webmaster Tools if the URL contains any personal information.
I had a similar issue a few days ago from a corrupt XML sitemap, and the index is already starting to clear up.