Send noindex, noarchive with 410?
-
My classifieds site returns a 410 along with an X-Robots-Tag HTTP header set to "noindex,noarchive" for vehicles that are no longer for sale. Google, however, apparently refuses to drop these vehicles from its index (at least as reported in GWT). By returning a "noindex,noarchive" directive, am I effectively telling the bots "yeah, this is a 410, but don't record the fact that this is a 410", thus canceling out the intended effect of the 410?
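For context, here is a minimal sketch of the kind of response described above, using only Python's standard library. The /vehicles/sold/ path rule is an assumption for illustration; the site's actual stack and URL scheme aren't shown here.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ListingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical rule: retired listings live under /vehicles/sold/.
        if self.path.startswith("/vehicles/sold/"):
            body = b"<h1>This vehicle is no longer for sale.</h1>"
            self.send_response(410)  # 410 Gone: the listing is permanently removed
            self.send_header("X-Robots-Tag", "noindex, noarchive")
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            # Active listings and everything else are out of scope for this sketch.
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ListingHandler).serve_forever()
```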
-
That sounds good. Let me know if you have further questions; I'm always glad to help!
-
Thanks for the info, mememax. I don't relish the thought of using the removal tool, but I suppose I can 301-redirect many of those 410s to category pages and then use the GWT removal tool for the rest.
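A rough sketch of that redirect idea, in Python with a made-up listing-to-category mapping, just to illustrate the shape of it:

```python
# Hypothetical mapping from retired listing paths to their category pages.
RETIRED_LISTINGS = {
    "/vehicles/1234-ford-focus": "/vehicles/used/ford/",
    "/vehicles/5678-honda-civic": "/vehicles/used/honda/",
}

def response_for(path: str):
    """Return (status, headers) for a request path: a 301 to the category
    page if the listing has been retired, otherwise a plain 200."""
    target = RETIRED_LISTINGS.get(path)
    if target:
        return 301, {"Location": target}
    return 200, {}

if __name__ == "__main__":
    print(response_for("/vehicles/1234-ford-focus"))    # (301, {'Location': '/vehicles/used/ford/'})
    print(response_for("/vehicles/9999-still-for-sale"))  # (200, {})
```

The remaining retired pages with no sensible category target would then go through the GWT removal tool, as described above.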
-
Hey Tony, you handled it the right way: you returned the error code plus the noindex. However, Google won't drop the page from its index until it has recrawled it several times (a quick way to spot-check what a retired URL actually returns is sketched after this list).
You can do this: first of all, make sure that no links point to that page, then:
- check in GWT whether the page shows up as a 404 and wait for it to disappear from the crawl errors report, or
- go to GWT and ask Google to remove it from the index. This is the fastest way, and Google requires that the page carry a noindex or return a 404 before it will process the request, so you're more than fine there. However, depending on the volume of 404s you have, this can become a huge and repetitive task.
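A minimal way to do that spot-check, using Python's standard library (the URL below is a placeholder):

```python
import urllib.request
import urllib.error

def check(url: str) -> None:
    """Print the HTTP status code and X-Robots-Tag header a URL returns."""
    try:
        resp = urllib.request.urlopen(url)
        status, headers = resp.status, resp.headers
    except urllib.error.HTTPError as err:  # 4xx/5xx responses raise, but still carry headers
        status, headers = err.code, err.headers
    print(url, status, headers.get("X-Robots-Tag"))

if __name__ == "__main__":
    # Hypothetical retired-listing URL, for illustration only.
    check("https://example.com/vehicles/1234-ford-focus")
```

If the status isn't the expected 410 (or 301) or the header is missing, the server configuration is the first place to look before waiting on recrawls.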
Related Questions
-
Should I nofollow/noindex the outgoing links in a news aggregator website?
We have a news aggregator site that has two types of pages.
First type: category pages for economic, sports, or political news. We intend to do SEO on these category pages to get organic traffic. They have pagination and show the latest and most-viewed news in the corresponding category.
Second type: news headlines from other sites, displayed on the category pages. Clicking a link takes the user to the news page on the source site. These are outgoing links, and we redirect them with JavaScript (not a 301). In effect, they are our website's articles that have only titles (linked to the destination) and meta descriptions (read from the news RSS feeds).
Question: should we nofollow/noindex the second type of links? Since crawl budget is limited, isn't it better to spend it on the pages we have invested in (the first type)?
Technical SEO | undaranfahujakia
-
Sitemap & noindex inconsistency?
Hey Moz Community! On the CMS in question, the sitemap and robots file are locked down and can't be edited or modified whatsoever. If I noindex a page but it is still in the XML sitemap, will it get indexed? Thoughts, comments, and experience greatly appreciated and welcome.
Technical SEO | paul-bold
-
Noindex Success?
Has anyone had success implementing noindex/follow on pages of a site that has been hit by a Panda penalty? Our site has a lot of duplicate content in product descriptions that we had permission to use from our distributor (who is also online). We went ahead and applied noindex/follow to those pages in the hope that Google will focus on the products we carry that do have original descriptions (about 1/3 of our products). We didn't want to simply remove those products, since they are genuinely useful to our customers. Most of the duplicated content is in the form of ingredient lists.
Technical SEO | dustyabe
-
Timely use of robots.txt and meta noindex
Hi, I have been checking every possible resource on content removal, but I am still unsure how to remove already-indexed content.
When I use robots.txt alone, the URLs remain in the index; no crawl budget is wasted on them, but having 100,000+ completely identical login pages sitting in the omitted results can't be a good thing. When I use meta noindex alone, I keep my index clean but also keep Googlebot busy crawling these no-value pages. When I use robots.txt and meta noindex together on existing content, I'm asking Google to ignore my content while at the same time blocking it from ever crawling the noindex tag. Robots.txt plus the URL removal tool is not a good solution either, as I have failed to remove directories this way; it seems only exact URLs can be removed like that. I need a clear solution that solves both issues (indexing and crawling).
What I am trying now is the following: I remove these directories (one at a time, to test the theory) from robots.txt and at the same time add the meta noindex tag to all pages within the directory. The number of indexed pages should start decreasing (while useless crawling increases), and once the number of indexed pages is low or zero, I would put the directory back into robots.txt and keep the noindex on all pages within it. Can this work the way I imagine, or do you have a better way of doing it? Thank you in advance for all your help.
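A quick script can confirm the core conflict described here: while a directory is disallowed in robots.txt, Googlebot cannot fetch the pages and therefore never sees their meta noindex. A hedged sketch using Python's standard library (the robots.txt URL and test path are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt and URL, for illustration only.
ROBOTS_TXT_URL = "https://example.com/robots.txt"
TEST_URL = "https://example.com/login/session-12345"

parser = RobotFileParser()
parser.set_url(ROBOTS_TXT_URL)
parser.read()  # fetch and parse the live robots.txt

# False means Googlebot is blocked from the URL and will never
# see a meta noindex tag placed on that page.
print(parser.can_fetch("Googlebot", TEST_URL))
```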
Technical SEO | Dilbak
-
Index or Noindex WordPress Categories?
I've read a few different opinions on this, but I'm still unclear on the best practice. I use my categories more like tags. Let's say I write a post about SEO, local marketing, and indexing; I would use the categories "seo" + "marketing" + "indexing", so that same post shows up on all three category pages. If these category pages are all set to be indexed, what impact does that have on my post being indexed? Should I noindex all of the categories except for the main ones to avoid too much duplicate content? Or do you recommend noindexing all of the categories? I know some SEO plugins make this easy to do (I'm using Yoast). The only reason I'm hesitant to noindex all categories is that some of them rank well for their subject. I also tried noindexing about a month ago and lost a lot of blog traffic, so I reversed it. Now some of my category pages have overtaken my post rankings, which makes it harder for readers to find the content, but my overall blog traffic is back up. In my situation, what is the best thing to do long term? I just started using my blog a lot more, so I want to know that I have it set up correctly. Thanks in advance!
Technical SEO | ChaseH
-
Thoughts about stub pages - 200 & noindex ok, or 404?
With large database/template-driven websites, it is often possible to end up with a lot of pages that have no content on them. What are the current thoughts on handling these empty pages? Options:
- return a 200 status code with a noindex meta tag
- return a 404 page and status code
- something else?
Thanks
Technical SEO | slingshot
-
We are still seeing duplicate content on SEOmoz even though we have marked those pages as "noindex, follow." Any ideas why?
We have many pages on our website that have been set to "noindex, follow." However, SEOmoz is still reporting them as duplicate content. Why is that?
Technical SEO | cmaseattle
-
I have pages that are showing up as having too many links, yet they are noindexed.
I've got several pages flagged as having "too many on-page links," and the pages mentioned have already been noindexed. Do these pages need to be nofollowed too? Here's one of the pages: http://digisavvy.com/site-map/. There are several pages like this, most of which are category or tag archives, which I've noindexed... Do I need to nofollow these, too?
Technical SEO | digisavvy