How to de-index a page with a query string of the form domain.com/?"spam"
-
The site in question was hacked years ago. All the security scans come up clean, but SEO crawlers like Semrush and Ahrefs still show it as an indexed page. I can even click through on it, and it takes me to the homepage with no 301. Where is the page, and how do I de-index it?
domain.com/?spam
There are multiple instances of this.
http://www.clipular.com/c/5579083284217856.png?k=Q173VG9pkRrxBl0b5prNqIozPZI
-
You are most welcome. I'm glad to hear your road to site recovery is coming along. I'm also glad to confirm that, to the best of my knowledge, your understanding of the "*" operator and the Disallow: /?spam rule is correct. One more thing:
Fetch as Google and Request Indexing
Apologies, I neglected to mention this step in my answer; it should be included. This is the best tool I'm aware of for asking Google, "hey, crawl me, please." Do this after you upload your shiny new robots.txt. In GSC, under Crawl, select Fetch as Google. Then select Fetch and Render. When the status is Partial or Complete, click Request Indexing. There is no guarantee here, and in my experience Google does what it wants. Even so, I've seen results in less than 2 hours (full disclosure: the longest I've waited has been 3 days).
Penalty Free
I agree. They cannot possibly be penalizing your site; at least, not purposefully. You have taken all recommended actions, and then some, to resolve the site's issues. Even if you do have a few bad backlinks floating around out there from some black-hat tier-3 PBN site, Penguin 4.0 should discredit that bad link juice. Your site doesn't even have the offending pages. It's just a matter of time before Google's index lines back up with your live site.
Good Work Sir,
Wipe the Index Clean,
CopyChrisSEO and the Vizergy Team -
Thanks very much for your explanation.
I have gone ahead and temporarily blocked the pages in GSC.
I am working on the robots.txt and see there are no instructions for the crawlers to skip over the URLs in question.
I understand that I should use the "*" operator to alert all crawlers to disallow the pages in this format:
User-agent: *
Disallow: /?spam
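As a sanity check before uploading, the rule above can be tested with Python's standard-library robots.txt parser. This is a minimal sketch; domain.com and /?spam stand in for the real site and the real spam string:

```python
from urllib import robotparser

# The proposed robots.txt rules, exactly as they would appear in the file.
rules = [
    "User-agent: *",
    "Disallow: /?spam",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The offending URL pattern is blocked for all crawlers...
print(rp.can_fetch("*", "http://domain.com/?spam"))  # False
# ...while the homepage itself stays crawlable.
print(rp.can_fetch("*", "http://domain.com/"))       # True
```

One caveat worth keeping in mind: Disallow stops crawling, but it does not by itself remove an already-indexed URL, which is why the Remove URLs step in GSC matters too.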
Finally, I will send the suggested edit to Google and see where that gets me. Honestly, at this point, Google cannot possibly penalize the site any worse, so anything that works toward cleaning up the site's index will be a step in the right direction.
-
Hello Miamirealestatetrendsguy and fellow Mozers,
It sounds like you have had a crazy time handling this hack. The good news is that, as far as I can tell from the given information, you are close to resolution. Googlebot should correct the indexed pages over time, but I'm certain you would like to expedite that process. Here are three recommendations that come to mind: remove URLs via GSC, block the offending URLs via robots.txt, and suggest edits in Google's SERPs.
Remove URLs via GSC
In GSC, under Google Index, select Remove URLs. This suppression is temporary, however; click on More Information to read about that. My experience with it has been suppression for a few months. Don't worry about the time limit, though. Our next step should take effect before your time is up.
Block the Offending URLs via Robots.txt
Before you do this, be very certain of what you are doing. Once you are confident, list your offending URLs, add Disallow rules for those URL patterns to your robots.txt, and upload it. Hopefully, you can find commonalities that shorten this list and save you time.
Note: I have purposefully avoided the details of how to do this here, because it is vital that SEOs learn to do it with full knowledge of the potential risks, as well as how to avoid those risks. Here are some resources:
• Google Support
• Moz's Robots.txt Rundown
• Search Engine Land's Deeper Look
Suggest Edits in Google's SERPs
This one is iffy, and I don't entirely trust Google to use this feedback. However, I have done it, and it has worked more than once. Find your offending results and send specific feedback.
Wipe that Index Clean,
CopyChrisSEO and the Vizergy Team
Related Questions
-
[Organization schema] Which Facebook page should be put in "sameAs" if our organization has separate Facebook pages for different countries?
We operate in several countries and have this kind of domain structure:
Technical SEO | | Telsenome
example.com/us
example.com/gb
example.com/au
For our schemas we've planned to add an Organization schema on our top domain and let all pages point to it. This introduces a problem: we have a separate Facebook page for every country. Should we put one Facebook page in the "sameAs" array? Or all of our Facebook pages? Or should we skip it altogether?
Only one Facebook page:
{
  "@type": "Organization",
  "@id": "https://example.com/org/#organization",
  "name": "Org name",
  "url": "https://example.com/org/",
  "sameAs": [
    "https://www.linkedin.com/company/xxx",
    "https://www.facebook.com/xxx_us"
  ]
}
All Facebook pages:
{
  "@type": "Organization",
  "@id": "https://example.com/org/#organization",
  "name": "Org name",
  "url": "https://example.com/org/",
  "sameAs": [
    "https://www.linkedin.com/company/xxx",
    "https://www.facebook.com/xxx_us",
    "https://www.facebook.com/xxx_gb",
    "https://www.facebook.com/xxx_au"
  ]
}
Bonus question: this reasoning springs from the thought that we should only have one Organization schema. Or can we have multiple sub-organizations?
-
Help Center/Knowledgebase effects on SEO: Is it worth my time fixing technical issues on no-indexed subdomain pages?
We're a SaaS company and have a pretty extensive help center resource on a subdomain (help.domain.com). This has been set up and managed over a few years by someone with no knowledge of SEO, meaning technical things like 404 links, bad redirects and http/https mixes have not been paid attention to. Every page on this subdomain is set to NOT be indexed in search engines, but we do sometimes link to help pages from indexable posts on the main domain. After spending time fixing problems on our main website, our site audits now flag almost solely errors and issues on these non-indexable help center pages every week. So my question is: is it worth my time fixing technical issues on a help center subdomain that has all its pages non-indexable in search engines? I don't manage this section of the site, and so getting fixes done is a laborious process that requires going through someone else - something I'd rather only do if necessary.
Technical SEO | | mglover19880 -
Google Search Console "Text too small to read" Errors
What are the guidelines / best practices for clearing these errors? Google has some pretty vague documentation on how to handle this sort of error. User behavior metrics in GA are pretty much in line with desktop usage and don't show anything concerning. Any input is appreciated! Thanks!
Technical SEO | | Digital_Reach2 -
De-indexed from Google
Hi Search Experts! We are just launching a new site for a client on a completely new URL. The client cannot provide any access details for their existing site. Any ideas on how we can get the existing site de-indexed from Google? Thanks, guys!
Technical SEO | | rikmon0 -
Spam posts indexed, what to do now?
Hi, So we had a staff problem last week and let through some spam posts (cheap Nike jerseys, etc.) that also got indexed by Google. (We just checked, and there are like 105 already indexed.) Of course, we have now removed all these spam posts, but what is the best practice at this point? Are we supposed to do something else to remove these from Google's index (maybe through Google Webmaster Tools)? We have already edited robots.txt to disallow those pages as a quick remedy. And finally, could this have done any harm? We were quite slow noticing these posts and removing them. They were there for about 12 days. Thanks
Technical SEO | | Gamer070 -
Why is my office page not being indexed?
Good morning from 24 degrees C, partly cloudy Wetherby, UK 🙂 This page is not being indexed by Google:
Technical SEO | | Nightwing
http://www.sandersonweatherall.co.uk/office-to-let-leeds/
1st question: I've checked the robots.txt file, no problems, and I'm in the midst of updating the XML sitemap (it had the old one in place). The page only has one link pointing at it, from http://www.sandersonweatherall.co.uk/Site-Map/. So is the reason it's not being indexed just a simple case of a lack of SEO juice from inbound links, and does the remedy lie in routing more inbound links to the offending page?
2nd question: Is the quickest way to diagnose whether a web address is being indexed to cut and paste the URL into the Google search box, and if it doesn't return the page, there's a problem? Thanks in advance, David
Search Engine Blocked by Robot Txt warnings for Filter Search result pages--Why?
Hi, We're getting 'Yellow' Search Engine Blocked by Robots.txt warnings for URLs that are, in effect, product search filter result pages (see link below) on our Magento ecommerce shop. Our robots.txt file is, to my mind, correctly set up, i.e. we would not want Google to index these pages. So why does SEOmoz flag this type of page with a warning? Is there any implication for our ranking? Is there anything we need to do about this? Thanks. Here is an example URL that SEOmoz thinks the search engines can't see: http://www.site.com/audio-books/audio-books-in-english?audiobook_genre=132 Below are the current entries for the robots.txt file.
User-agent: Googlebot
Disallow: /index.php/
Disallow: /*?
Disallow: /*.js$
Disallow: /*.css$
Disallow: /checkout/
Disallow: /tag/
Disallow: /catalogsearch/
Disallow: /review/
Disallow: /app/
Disallow: /downloader/
Disallow: /js/
Disallow: /lib/
Disallow: /media/
Disallow: /*.php$
Disallow: /pkginfo/
Disallow: /report/
Disallow: /skin/
Disallow: /utm
Disallow: /var/
Disallow: /catalog/
Disallow: /customer/
Sitemap:
Technical SEO | | languedoc0 -
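Magento's stock robots.txt writes several of these rules with wildcards (e.g. Disallow: /*?, Disallow: /*.js$), and asterisks are easily eaten by forum formatting. Assuming that is what is actually deployed, a rough sketch of Google-style rule matching (this toy function is illustrative, not a full robots.txt parser) shows why the filter URLs are reported as blocked:

```python
import re

def robots_rule_matches(rule: str, target: str) -> bool:
    """Match a single robots.txt rule the way Googlebot does:
    '*' is a wildcard, and a trailing '$' anchors the end of the URL."""
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"  # restore the end anchor
    return re.match(pattern, target) is not None

# Path + query of the example URL from the question:
url = "/audio-books/audio-books-in-english?audiobook_genre=132"

print(robots_rule_matches("/*?", url))             # True: any URL with a query string
print(robots_rule_matches("/catalogsearch/", url)) # False: different path prefix
```

If /*? really is in place, every URL with a query string is disallowed on purpose, so the warning is simply confirming the rule works as intended; it is informational rather than something to fix.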
www/non-www, .co.uk/.com
When I started SEO, I didn't really know what I was doing (still don't!). Just wondering if anyone can help me with this small problem. I now understand that I basically have 4 URLs:
www.ablemagazine.com (Page Authority: 38/100)
www.ablemagazine.co.uk (Page Authority: 47/100)
ablemagazine.com (Page Authority: 3/100)
ablemagazine.co.uk (Page Authority: 51/100)
What should the configuration be to ensure I'm not losing massive amounts of link juice? At the moment I have ablemagazine.co.uk set as my default domain in Webmaster Tools, and www.ablemagazine.com, www.ablemagazine.co.uk, and ablemagazine.com all 301 redirect there (I think).
Technical SEO | | craven220
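A common way to enforce a single canonical host is one site-wide 301. This sketch assumes an Apache server with mod_rewrite enabled and all four hostnames pointing at the same document root (the domain is taken from the question; adjust for a different setup):

```apache
RewriteEngine On
# Redirect every host variant (www/non-www, .com/.co.uk) to the canonical domain,
# preserving the requested path, with a permanent 301.
RewriteCond %{HTTP_HOST} !^ablemagazine\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://ablemagazine.co.uk/$1 [R=301,L]
```

With that in place, link equity from all four variants consolidates onto ablemagazine.co.uk, matching the default domain already set in Webmaster Tools.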