Google is indexing bad URLs
-
Hi All,
The site I am working on is built on WordPress. The Revolution Slider plugin was installed at some point; while no longer utilized, it remained on the site for some time. This plugin began creating hundreds of URLs containing nothing but code on the page. I noticed these URLs were being indexed by Google. The URLs follow the structure: www.mysite.com/wp-content/uploads/revslider/templates/this-part-changes/
I have done the following to prevent these URLs from being created & indexed:
1. Added a directive in my .htaccess to 404 all of these URLs
2. Blocked /wp-content/uploads/revslider/ in my robots.txt
3. Manually de-indexed each URL using the Google Search Console removal tool
4. Deleted the plugin
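For reference, the robots.txt rule from step 2 would look something like this (the path is taken from the URL structure above):

```
User-agent: *
Disallow: /wp-content/uploads/revslider/
```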
However, new URLs still appear in Google's index, despite being blocked by robots.txt and resolving to a 404. Can anyone suggest any next steps?
Thanks!
-
All of the plugins I can find allow the tag to be deployed on pages, posts, etc. You pick from a pre-defined list of existing content, instead of just whacking in a URL and having it inserted (annoying!)
If you put an index.php at that location (the location of the 404), you could put whatever you wanted in it. Might work (maybe test with one first). It would resolve with a 200, so you'd then need to force a 410 over the top. Not very scalable though...
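As a rough sketch of that idea, assuming Apache serves index.php for the directory: a minimal index.php dropped into one of the offending directories could send the 410 and a noindex header itself, rather than resolving with a 200. This is an illustration only, not a tested fix:

```php
<?php
// Hypothetical index.php for one /wp-content/uploads/revslider/templates/<slug>/ directory.
// Overrides the default 200 response with a 410 (Gone) plus a noindex header.
http_response_code(410);
header('X-Robots-Tag: noindex');
echo 'This page has been permanently removed.';
```

As noted above, one file per directory doesn't scale; it only makes sense as a test on a single URL.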
-
I do agree, I may have to pass this off to someone with more backend experience than myself. In terms of plugins, are you aware of any that will allow you to add noindex tags to an entire folder?
Thanks!
-
Hmm, that's interesting - it should work just as you say! This is the point where you need a developer's help rather than an SEO analyst's :') sorry!
Google will revisit 410s if it believes there is a legitimate reason to do so, but it's much less likely to revisit them than 404s (which leave open the possibility that the content may return).
Plugins are your friends. Too many will overload a site and make it run pretty slowly (especially as a typical PHP setup handles each request single-threaded!) - but you would only need this plugin temporarily anyway.
You might have to start using something like phpMyAdmin to browse your SQL databases. It's possible that the uninstall didn't work properly and there are still database tables at work, generating fresh URLs. You can quash them at the database level if required; however, I'd say go to a web developer, as manual DB edits can be pretty hazardous for a non-expert.
-
Thank you for all your help. I added a directive to 410 the pages in my .htaccess, like so: Redirect 410 /revslider*/. However, it does not seem to work.
Currently, I am using Options All -Indexes to 404 the URLs. I still worry, though: even if Google would not revisit a 410, could it still initially index it? That seems to be the case with my 404 pages; Google is actively indexing the new 404 pages that the broken plugin is producing.
As I cannot seem to locate the directory in cPanel, adding a noindex to the pages has been tough. I will look for a plugin that can dynamically add it based on folder structure, because the URLs are still actively being created.
The ongoing creation of the URLs is the ultimate source of the issue. I expected that deleting the plugin would resolve it, but that does not seem to be the case.
-
Just remember, the only wildcard characters robots.txt supports are "*" (any string of characters) and "$" (end of URL). Other regex characters like "?" are not supported, so it's still very limited. Changing the response from 404 to 410 should really help, but be prepared to give Google a week or two to digest your changes.
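On the .htaccess side, there is a likely explanation for why Redirect 410 /revslider*/ did not work: Apache's plain Redirect directive matches literal URL-path prefixes only and treats "*" as a literal character, while RedirectMatch is the regex-aware variant. A minimal sketch, with the path taken from the URL structure described above:

```apache
# .htaccess sketch: "Redirect" matches literal prefixes, so the "*" above
# never matched anything. "RedirectMatch" takes a real regular expression,
# and "gone" serves status 410 for everything under the plugin's path.
RedirectMatch gone ^/wp-content/uploads/revslider/
```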
Yes, it would be tricky to inject those URLs with meta noindex tags, but it wouldn't be impossible. You could create an index.php file in the directory of each page which contained a meta noindex directive, or use a plugin to inject the tag onto specific URLs. There will be ways, don't give up too early! That being said, this part probably won't add much more than the 410s will.
It wouldn't be a bad idea to inject the noindex tags, but do it for 410s and not for 404s (doing it for 404s could cause you BIG problems further down the line). Remember: 404 means "not found, and may or may not come back", while 410 means "gone, never coming back". Really, all 410s should be served with noindex tags. Google can read dynamically generated content, but it is less likely to do so and crawls it less often. Still, it would at least make the problem begin shrinking over time. It would be better to get the tags into the unmodified source code (server-side rendering).
By the way, you can send a no-index directive in the HTTP header if you are really stuck!
https://sitebulb.com/hints/indexability/robots-hints/noindex-in-html-and-http-header/
The above post is quite helpful; it shows noindex directives in HTML but also in the HTTP header.
In contrast to that example, you'd be serving 410 (Gone), not 200 (OK).
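To sketch the HTTP-header route in .htaccess (assuming Apache 2.4 with mod_headers available; the path is taken from the URL structure above):

```apache
# Attach a noindex header to everything under the plugin's path.
# "Header always set" is needed so the header is also sent with
# error responses such as 410, not just 200s.
<If "%{REQUEST_URI} =~ m#^/wp-content/uploads/revslider/#">
    Header always set X-Robots-Tag "noindex"
</If>
```

This avoids touching the generated pages at all, since the directive lives entirely in the server config.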
-
Thank you for your response! I will certainly use the wildcard in my robots.txt and try to change my .htaccess directive to 410 the pages.
However, the issue is that a defunct plugin is randomly creating hundreds of these URLs without my knowledge, and I cannot seem to access them. As this is the case, I can't add a noindex tag to them.
This is why I manually de-indexed each page using the GSC removal tool and then blocked them in my robots.txt. My hope was that, after doing so, Google would no longer be able to find the bad URLs.
Despite this, Google is still actively crawling and indexing new URLs following this path, even though they are blocked by my robots.txt (validated). I am unsure how these URLs even continue to be created, as I deleted the plugin.
I had the idea of writing some JavaScript that would check the status code and insert a noindex tag if the header returned a 404, but I don't believe Google would even recognize this, as the tag would be inserted dynamically. Ultimately, I would like to find a way to stop the plugin from creating these URLs; that way I can simply manually de-index them again.
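Rather than injecting tags with client-side JavaScript (which, as suspected above, Google may not reliably execute), a small offline script can at least confirm what each bad URL is actually serving. A rough sketch in Python; the pattern and example paths are assumptions based on the URL structure described in this thread:

```python
import re
import urllib.request
from urllib.error import HTTPError

# Pattern for the rogue plugin's URLs; the final slug varies per template.
REVSLIDER = re.compile(r"^/wp-content/uploads/revslider/templates/[^/]+/?$")

def is_revslider_url(path: str) -> bool:
    """True if a URL path matches the plugin's generated-page pattern."""
    return bool(REVSLIDER.match(path))

def check_status(url: str) -> int:
    """Return the HTTP status a URL serves (404/410 arrive as HTTPError)."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except HTTPError as err:
        return err.code

if __name__ == "__main__":
    # Example: feed in paths exported from the Google Search Console report.
    for path in ["/wp-content/uploads/revslider/templates/some-template/",
                 "/blog/a-real-post/"]:
        print(path, is_revslider_url(path))
```

Running check_status against a sample of the bad URLs would show whether they really return 404/410 or are still resolving with 200.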
Thanks,
-
You have taken some good measures there, but it does take Google time to revisit URLs and re-index them (or remove them from the index!)
Bear in mind that 404 only means a URL was not found; it says nothing about whether the page will come back. The status code you are looking to serve is 410 (Gone), which is a harder signal.
Robots.txt (for Google) does in fact support wildcards. It's not full regex; the only wildcards supported are "*" (matching any string of characters) and "$" (end of URL). You could supplement with a rule like this:
User-agent: *
Disallow: /*revslider*

That should, theoretically, block Google from crawling any URL containing the string "revslider". Be sure to **validate** any new robots.txt rules using Google Search Console to check they are working correctly! Remember that robots.txt affects crawling and **not indexation!** To give Google a directive not to index a URL, you should use the meta noindex tag: [https://support.google.com/webmasters/answer/93710?hl=en](https://support.google.com/webmasters/answer/93710?hl=en)

**The steps are:**
- Remove your existing robots.txt rule (which would stop Google crawling the URL and thus stop them seeing a Meta no-index tag or any change in status code)
- Apply status 410 to those pages instead of 404
- Apply Meta no-index tags to the 410'ing URLs
- Wait for Google to digest and remove the pages from its index
- Put your robots.txt rule back to prevent it ever happening again
- Supplement with an additional wildcard rule
- Done!

Hope that helps!