Mass Removal Request from Google Index
-
Hi,
I am trying to clean up a news website. When this website was first made, the people who set it up copied over all kinds of articles they had as a newspaper, including tests, internal communications, and drafts. The site has lots of junk, but all of it came from the initial backup, i.e., from before 1st-June-2012. So, by removing all the mixed content prior to that date, we can have pure articles starting June 1st, 2012!
Therefore:
- My dynamic sitemap now contains only articles with a release date between 1st-June-2012 and now.
- Any article with a release date prior to 1st-June-2012 returns a custom 404 page with a "noindex" meta tag, instead of the actual content of the article (a sketch of this logic follows below).
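For illustration, here is a minimal sketch of that date-based logic. The real site is ASP.NET; this Python/Flask version, with hypothetical names throughout, only shows the idea:

```python
# Minimal sketch of the date-based filtering described above (hypothetical
# names; the real site is ASP.NET, Python is used here only to illustrate).
from dataclasses import dataclass
from datetime import date
from flask import Flask, abort, request

app = Flask(__name__)
CUTOFF = date(2012, 6, 1)  # everything released before this date is junk

@dataclass
class Article:
    doc_id: int
    release_date: date
    body: str

ARTICLES = {}  # stand-in for the real article database

@app.route("/")
def news_detail():
    if request.args.get("page") != "newsdetail":
        abort(404)
    doc_id = request.args.get("DocID", type=int)
    article = ARTICLES.get(doc_id)
    if article is None:
        abort(404)  # never existed in the first place
    if article.release_date < CUTOFF:
        abort(404)  # pre-cutoff junk: serve the custom 404 + noindex page
    return article.body  # genuine article, normal 200
```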
The question is: how can I remove all this junk from the Google index as fast as possible, given that it is no longer on the site but still appears in Google results?
I know that for individual URLs I can request removal via this link:
https://www.google.com/webmasters/tools/removals
The problem is doing this in bulk, as there are tens of thousands of URLs I want to remove. Should I put the articles back in the sitemap so the search engines crawl it and see all the 404s? I believe this is very wrong. As far as I know, it will cause problems, because search engines will try to access non-existent content that the sitemap declares as existent, and will report errors in Webmaster Tools.
Should I submit a "deleted items" sitemap using the <expires> tag? I think this is for Custom Search Engines only, and not for generic Google web search:
https://developers.google.com/custom-search/docs/indexing#on-demand-indexing
The site unfortunately doesn't use any kind of "folder" hierarchy in its URLs, but instead the ugly GET params, and a folder-based pattern is impossible since all articles (removed junk and actual articles alike) are of the form:
http://www.example.com/docid=123456
So, how can I bulk remove all this junk from the Google index... relatively fast?
-
Hi Ioannis,
What about the first suggestion? Can you create a page linking to all of the pages that you'd like to remove, then have Google crawl that page?
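(For illustration, a throwaway script along these lines could generate such a page. This is only a sketch, with `removed_doc_ids` as a hypothetical placeholder for the real list of removed articles:)

```python
# Sketch: build a one-off HTML page that links to every removed article, so
# Googlebot can crawl it and discover all the 404/410 responses in one pass.
removed_doc_ids = [894052, 894053, 894054]  # placeholder data

links = "\n".join(
    f'<li><a href="http://www.neakriti.gr/?page=newsdetail&amp;DocID={doc_id}">'
    f"{doc_id}</a></li>"
    for doc_id in removed_doc_ids
)
page = f"<!DOCTYPE html>\n<html><body><ul>\n{links}\n</ul></body></html>"

with open("removed-articles.html", "w") as f:
    f.write(page)
```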
Best,
Kristina
-
Thank you Kristina,
I know about the URL structure. I have spent the past few months trying to clean up this site (I was not involved in its creation). It has several more SEO problems, some already fixed and some not yet; I have found more than 50 so far, most of them critical.
The junk pages do not exist in the sitemap that I built, and because I wrote this sitemap myself, I can easily make another one containing the articles that I have removed (just by reversing part of the SELECT query behind the sitemap to get the removed ones).
http://www.neakriti.gr/webservices/sitemap-index.aspx
So far I have implemented the last of your suggestions; here is an example:
This is a valid article page (Status Code: 200):
http://www.neakriti.gr/?page=newsdetail&DocID=1314221
This is a non-existent article page that never existed in the first place (Status Code: 404):
http://www.neakriti.gr/?page=newsdetail&DocID=12345678
This is one of the articles that I removed from the sitemap and the site (Status Code: 410):
http://www.neakriti.gr/?page=newsdetail&DocID=894052
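(As a side note, a quick script can verify status codes like these in bulk before asking Google to recrawl anything; a sketch using Python's requests library:)

```python
# Sketch: spot-check that valid, never-existing, and removed DocIDs return
# the status codes claimed above (200, 404, and 410 respectively).
import requests

expected = {1314221: 200, 12345678: 404, 894052: 410}

for doc_id, want in expected.items():
    url = f"http://www.neakriti.gr/?page=newsdetail&DocID={doc_id}"
    got = requests.get(url, allow_redirects=False).status_code
    print(f"{url} -> {got}" + ("" if got == want else f" (expected {want})"))
```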
Also, I would like you to take a look at another question about the same site; I think it relates to this question about the garbage articles too:
https://moz.com/community/q/multiple-instances-of-the-same-article
Thank you so much!
-
Hi Ioannis,
You're in quite a bind here, without a good URL structure! I don't think there's any one perfect option, but I think all of these will work:
- Create a page on your site that links to every article you would like to delete, keeping those articles 404/410ed. Then, use the Fetch as Googlebot tool, and ask Google to crawl the page plus all of its links. This will get Google to quickly crawl all of those pages, see that they're gone, and remove them from their index. Keep in mind that if you just use a 404, Google may keep the page around for a bit to make sure you didn't just mess up. As Eric said, a 410 is more of a sure thing.
- Create an XML sitemap of those deleted articles, and have Google crawl it. Yes, this will create errors in GSC, but errors in GSC mean that Google is concerned you've made a mistake, not that you're necessarily being penalized. Just mark those guys as fixed and take the sitemap down once Google has crawled it (a sketch of a generator for this kind of sitemap follows this list).
- 410 these pages, remove all internal links to them (use a tool like Screaming Frog to make sure you didn't miss any links!), and remove them from your sitemap. That'll distance you from that old, crappy content, and Google will slowly realize that it's been removed as it checks in on its old pages. This is probably the least satisfying option, but it's an option that'll get the job done eventually.
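For the second option, here is a sketch of what a deleted-articles sitemap generator might look like (Python, with `removed_doc_ids` as a hypothetical placeholder for the reversed query's results):

```python
# Sketch: generate a temporary XML sitemap listing only the deleted articles,
# so Google recrawls them and sees the 404/410 responses.
from xml.sax.saxutils import escape

removed_doc_ids = [894052, 894053, 894054]  # placeholder data

entries = "\n".join(
    "  <url><loc>{}</loc></url>".format(
        escape(f"http://www.neakriti.gr/?page=newsdetail&DocID={doc_id}")
    )
    for doc_id in removed_doc_ids
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + entries
    + "\n</urlset>\n"
)
with open("deleted-articles-sitemap.xml", "w") as f:
    f.write(sitemap)
```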
Hope this helps! Let us know what you decide to do.
Best,
Kristina
-
Thank you,
so you suggest that, based on my date query, instead of blindly blocking everything before that date with a 404, I should keep blocking it with a 410, while anything that never existed in the first place still returns a 404.
Also, another question about the blocked articles that return 410: should I put their URLs back in the XML sitemap, or not?
-
Any article that has a release date prior to 1st-June-2012 should return a custom 410 page with a "noindex" meta tag, instead of the actual content of the article.
The error returned should be a "410 Gone" and not just a 404. That way Google will treat it differently, and may remove it from the index faster than it would with a 404. You can also use the Google removal tool. And don't forget the robots.txt file; there may be directories with this content that you need to disallow.
But overall, using a 410 is going to be better and most likely faster.
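In code terms the switch is small; a hypothetical Flask-style sketch of a 410 response carrying the noindex meta tag (again, only illustrative, not the site's real ASP.NET code):

```python
# Sketch: pre-cutoff junk now calls abort(410) instead of abort(404), and
# the custom error page itself carries the noindex meta tag.
from flask import Flask, abort

app = Flask(__name__)

GONE_PAGE = (
    "<!DOCTYPE html><html><head>"
    '<meta name="robots" content="noindex">'
    "</head><body>This article has been permanently removed.</body></html>"
)

@app.errorhandler(410)
def gone(error):
    return GONE_PAGE, 410

# In the request handler, the only change from the 404 version:
#     if article.release_date < CUTOFF:
#         abort(410)  # "410 Gone" instead of 404
```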
-
Thank you for your response.
I definitely cannot use noindex because, as I explained, I changed all articles prior to the cutoff date to return 404. So this content is not visibly available on the web to carry a noindex directive. Unless you mean putting it on my custom 404 page, where, yes, it is there.
Also, there is no folder to disallow in robots.txt, since the URLs are in the ugly form of GET params like DocID=12345. Given that, there are thousands of DocIDs that are junk and removed, and thousands that are the actual articles.
So I assumed that creating a "deleted articles" sitemap, where each <url> entry contains an <expires>2016-06-01</expires> tag, seemed the most logical thing, but I am afraid that is for "custom search engines" rather than for normal de-index requests, as described below:
https://developers.google.com/custom-search/docs/indexing#on-demand-indexing
-
Sitemaps are definitely not the way to go for this; you can't just put an <expires> tag in there and have pages go away. The best option is the meta robots tag, set to either "noindex, nofollow" or "noindex, follow". With this approach, and hopefully a relatively high crawl rate, you can make sure that these pages are removed from the Google index as soon as possible.
If you still want these pages to be indexed but just not crawled anymore (which, based on your explanation, I don't think you want), then go with robots.txt and exclude the pages you'd like there.