How do I de-index a page with a URL of the structure domain.com/?"spam"
-
The site in question was hacked years ago. All the security scans come up clean, but SEO crawlers like Semrush and Ahrefs still show it as an indexed page. I can even click through on it, and it takes me to the homepage with no 301. Where is the page, and how do I de-index it?
domain.com/?spam
There are multiple instances of this.
http://www.clipular.com/c/5579083284217856.png?k=Q173VG9pkRrxBl0b5prNqIozPZI
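Worth noting on the "no 301" behavior: to the web server, /?spam is just the homepage path / plus a query string that most CMS homepages silently ignore, which is why the click-through lands on the homepage without any redirect. A minimal Python sketch, using the placeholder domain.com from the question:

```python
from urllib.parse import urlsplit

# Both URLs resolve to the same path on the server; the spam part
# is only a query string, which the homepage template ignores.
clean = urlsplit("http://domain.com/")
spammy = urlsplit("http://domain.com/?spam")

print(clean.path)    # "/"
print(spammy.path)   # "/" -- same resource, hence no 301
print(spammy.query)  # "spam" -- the only difference
```

This is also why the "page" can't be found anywhere in the site's files: it was never a real page, just a query-string variant of the homepage that got indexed.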
-
You are most welcome. I'm glad to hear your road to site recovery is coming along. I'm also glad to confirm that, to the best of my knowledge, your understanding of the "*" operator and the Disallow: /?spam rule is correct. One more thing:
Fetch as Google and Request Indexing
Apologies, I neglected to mention this step in my answer; it should be included. This is the best tool I'm aware of to ask Google, "hey, crawl me, please." Do this after you upload your shiny new robots.txt. In GSC, under Crawl, select Fetch as Google. Then, select Fetch and Render. When the status is Partial or Complete, click Request Indexing. There is no guarantee here, and in my experience Google does what it wants. Even so, I've seen results in less than 2 hours (full disclosure: the longest I've waited has been 3 days).
Penalty Free
I agree. They cannot possibly be penalizing your site. At least, not purposefully. You have taken all recommended actions, and then some, to resolve the site's issues. Even if you do have a few bad backlinks floating around out there from some blackhat tier-3 PBN site, Penguin 4.0 should discredit that bad link juice. Your site doesn't even have the offending pages anymore. It's just a matter of time before Google's index lines back up with your live site.
Good Work Sir,
Wipe the Index Clean,
CopyChrisSEO and the Vizergy Team
-
Thanks very much for your explanation.
I have gone ahead and temporarily blocked the pages in GSC.
I am working on the robots.txt and see there are no instructions for the crawlers to skip over the URLs in question.
I understand that I should use the "*" operator to alert all crawlers to disallow the pages in this format:
User-agent: *
Disallow: /?spam
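Before uploading, a rule like this can be sanity-checked locally with Python's standard-library robots.txt parser. A quick sketch, with domain.com standing in for the real domain:

```python
from urllib import robotparser

# The proposed robots.txt, as a list of lines.
rules = [
    "User-agent: *",
    "Disallow: /?spam",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Any URL whose path-plus-query begins with /?spam is blocked;
# the plain homepage is still crawlable.
print(rp.can_fetch("*", "http://domain.com/?spam"))   # False
print(rp.can_fetch("*", "http://domain.com/"))        # True
```

Keep in mind that Disallow only stops crawling; pairing it with GSC's Remove URLs tool is what clears out what's already indexed.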
Finally, I will send the suggested edit to Google and see where that gets me. Honestly, at this point, they cannot possibly penalize the site any worse, so anything working towards cleaning up the site's index will be a step in the right direction.
-
Hello Miamirealestatetrendsguy and fellow Mozers,
It sounds like you have had a crazy time handling this hack. The good news is, as far as I can tell from the given information, you are close to resolution. Googlebot should correct the indexed pages over time, but I'm certain you would like to expedite that process. Here are three recommendations that come to mind: remove URLs via GSC, block the offending URLs via robots.txt, and suggest edits in Google's SERPs.
Remove URLs via GSC
In GSC, under Google Index, select Remove URLs. This suppression is temporary, however; click on "more information" to read more about that. My experience with it has been suppression for a few months. Don't worry about the time limit, though: our next step should take effect before your time is up.
Block the Offending URLs via Robots.txt
Before you do this, be very certain of what you are doing. Once you are confident, list your offending URLs, add Disallow rules covering them to your robots.txt, and upload it. (Note that robots.txt can only block crawling; noindex and nofollow are meta-tag/header directives and don't belong in robots.txt.) Hopefully, you can find commonalities to shorten this list and save yourself time. Note: I have purposefully avoided the details of how to do this here, because it is vital SEOs learn how to do it with full knowledge of the potential risks, as well as how to avoid those risks. Here are some resources:
• Google Support • Moz's Robots.txt Rundown
• Search Engine Land's Deeper Look
Suggest Edits in Google's SERPs
This one is iffy, and I don't really trust Google to use this feedback. However, I have done it, and it has worked more than once. Find your offending results and send specific feedback.
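On the "find commonalities" point: if you have a long export of offending URLs, a short script can collapse them into a handful of wildcard Disallow lines. A rough sketch, using hypothetical spam URLs that all share a query-string prefix:

```python
from urllib.parse import urlsplit

# Hypothetical offending URLs, e.g. from a GSC or crawler export.
offending = [
    "http://domain.com/?spam-pills",
    "http://domain.com/?spam-watches",
    "http://domain.com/blog/?spam-pills",
]

# Collapse to the shared start of each query string so one
# wildcard rule covers many URLs.
prefixes = sorted({urlsplit(u).query.split("-")[0] for u in offending})

robots_txt = "\n".join(["User-agent: *"] + [f"Disallow: /*?{p}" for p in prefixes])
print(robots_txt)
```

Googlebot honors the * wildcard in Disallow paths, but not every crawler does, so eyeball the generated rules against your real URL list before uploading.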
Wipe that Index Clean,
CopyChrisSEO and the Vizergy Team
Related Questions
-
Pages flagged in Search Console as having a "noindex" tag do not have a meta robots tag??
Hi, I am running a technical audit on a site which is causing me a few issues. The site is small and awkwardly built, using lots of JS, animations, and dynamic URL extensions (a bit of a nightmare). I can see that it has only 5 pages being indexed in Google, despite having over 25 pages submitted to Google via the sitemap in Search Console. The beta Search Console is telling me that there are 23 URLs marked with a 'noindex' tag; however, when I view the page source and check the code of these pages, there are no meta robots tags at all. I have also checked the robots.txt file. Also, both Screaming Frog and DeepCrawl are failing to pick up these URLs, so I am at a bit of a loss about how to find out what's going on. Inevitably, I believe the creative agency who built the site had no idea about general website best practice, and that the dynamic URL extensions may have something to do with the no-indexing. Any advice on this would be really appreciated. Are there any other ways of no-indexing pages which the dev/creative team might have implemented by accident? What am I missing here? Thanks,
Technical SEO | NickG-123
-
"Ghost" errors on blog structured data?
Hi, I'm working on a blog whose Search Console account warns me about a big bunch of errors in its structured data: Structured data - graphics, Structured data - hentry list, Structured data - detail. But when I go to https://developers.google.com/structured-data/testing-tool/ it tells me "all is ok". Any clue? Thanks in advance,
Technical SEO | Webicultors
-
Http://newsite.intercallsystems.com/vista-series/sales@intercallsystems.com
I keep getting crawl errors for URLs that have email addresses on the end. I have no idea what these are. Here is an example: the-audio-visual-system/sales@intercallsystems.com. Where would these be coming from? How are they created? How can I fix them? When I try to do a 301 redirect, it doesn't work. Thanks for your help,
Rena
Technical SEO | renalynd27
-
What is the difference between "Referring Pages" and "Total Backlinks" [on Ahrefs]?
I always thought they were essentially the same thing myself, but it appears there may be a difference? Anyone care to help me out? Cheers!
Technical SEO | Webrevolve
-
Should I nofollow search results pages?
I have a customer site where you can search for the products they sell. The URL format is domainname/search/keywords/, with "keywords" being whatever the user has searched for. This means the number of pages can be limitless, as the client has over 7500 products. Should I nofollow these pages, or should I simply rel=canonical the search page instead?
Technical SEO | spiralsites
-
How important is meta content="" name="title"?
How much does meta content="" name="title" impact rankings? Right now I have:
<title>Keyword</title>
So my question is: does this "Keyword 2", so-called meta title, have any impact on rankings?
Technical SEO | tonis
-
Getting Google to index new pages
I have a site, called SiteB, that has 200 pages of new, unique content. I made a table of contents (TOC) page on SiteB that points to about 50 pages of SiteB content. I would like to get SiteB's TOC page crawled and indexed by Google, as well as all the pages it points to. I submitted the TOC to Pingler 24 hours ago, and from the logs I see that Googlebot visited the TOC page but did not crawl any of the 50 pages linked to from it. I do not have a robots.txt file on SiteB. There are no robots meta tags (nofollow, noindex). There are no rel="nofollow" attributes on the links. Why would Google crawl the TOC (when I Pinglered it) but not crawl any of the links on that page? One other fact, and I don't know if this matters, but SiteB lives on a subdomain and the URLs contain numbers, like this: http://subdomain.domain.com/category/34404. Yes, I know that the number part is suboptimal from an SEO point of view. I'm working on that, too. But first I wanted to figure out why Google isn't crawling the TOC. The site is new and so hasn't been penalized by Google. Thanks for any ideas...
Technical SEO | scanlin
-
How do https pages affect indexing?
Our site involves e-commerce transactions that we want users to be able to complete via JavaScript popup/overlay boxes. In order to make the credit card form secure, we need the referring page to be secure, so we are considering making the entire site secure, meaning all of our site links would be https. (PayPal works this way.) Do you think this will negatively impact whether Google and other search engines are able to index our pages?
Technical SEO | seozeelot