Send noindex, noarchive with 410?
-
My classifieds site returns a 410 along with an X-Robots-Tag HTTP header set to "noindex,noarchive" for vehicles that are no longer for sale. Google, however, apparently refuses to drop these vehicles from its index (at least as reported in GWT). By returning a "noindex,noarchive" directive, am I effectively telling the bots "yeah, this is a 410, but don't record the fact that this is a 410", thus canceling out the intended effect of the 410?
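For context, here is roughly how such responses can be produced; a minimal sketch assuming a Flask app, where the route path and the sold-listing lookup are hypothetical stand-ins:

```python
# Minimal sketch: expired listings return HTTP 410 plus an
# X-Robots-Tag header. SOLD_IDS is a hypothetical stand-in
# for a real inventory lookup.
from flask import Flask, make_response

app = Flask(__name__)

SOLD_IDS = {"1234", "5678"}  # hypothetical retired listings

@app.route("/vehicles/<listing_id>")
def vehicle(listing_id):
    if listing_id in SOLD_IDS:
        resp = make_response("This vehicle is no longer for sale.", 410)
        resp.headers["X-Robots-Tag"] = "noindex, noarchive"
        return resp
    return f"Listing {listing_id} is for sale."
```

As the answer below confirms, the header does not cancel the 410; both signals tell Google the page should go away, and the URL drops once it has been recrawled a few times.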
-
That sounds good. Let me know if you have further questions; I'm always glad to help!
-
Thanks for the info, mememax. I don't relish the thought of using the removal tool, but I suppose I can 301-redirect many of those 410s to category pages and then use the GWT removal tool for the rest.
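A sketch of that 301 approach, again assuming Flask; the listing-to-category mapping is a hypothetical stand-in for a real lookup:

```python
# Sketch: 301-redirect retired listings to the nearest category page.
# The mapping below is hypothetical.
from flask import Flask, redirect

app = Flask(__name__)

RETIRED_TO_CATEGORY = {
    "1234": "/category/sedans",
    "5678": "/category/trucks",
}

@app.route("/vehicles/<listing_id>")
def vehicle(listing_id):
    target = RETIRED_TO_CATEGORY.get(listing_id)
    if target is not None:
        return redirect(target, code=301)  # permanent redirect
    return f"Listing {listing_id} is for sale."
```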
-
Hey Tony, you did it the right way: you returned the error code plus the noindex. However, Google won't drop a page from its index until it has crawled it several times.
You can do this: first of all, make sure you have no links pointing to the page, then either:
- watch in GWT to see whether the page shows up as a 404 and when it disappears from the crawl errors report, or
- go to GWT and ask Google to remove it from the index. This is the fastest way, and since Google requires you to add a noindex or return a 404 before it acts on a removal request, you are more than fine there. Depending on the volume of 404s you have, though, this can be a huge and repetitive task.
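If the volume is large, a quick script can confirm what each retired URL actually returns before you spend time on removal requests; a sketch using the requests library, with placeholder URLs:

```python
import requests

# Hypothetical sample of retired URLs to verify.
URLS = [
    "https://example.com/vehicles/1234",
    "https://example.com/vehicles/5678",
]

for url in URLS:
    # HEAD is enough to read the status code and any X-Robots-Tag header.
    r = requests.head(url, allow_redirects=False, timeout=10)
    print(url, r.status_code, r.headers.get("X-Robots-Tag", "-"))
```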
Related Questions
-
Should I nofollow/noindex the outgoing links in a news aggregator website?
We have a news aggregator site with two types of pages.
First type: category pages (economic, sports, or political news) that we intend to optimize for organic traffic. These pages are paginated and show the latest and most-viewed news in each category.
Second type: news headlines from other sites, displayed on the category pages. Clicking a headline sends the user to the article on the source site; these outgoing links are redirected via JavaScript (not a 301). They are effectively our own article stubs, carrying just a title (linked to the destination) and a meta description (read from the news RSS).
Question: should we nofollow/noindex the second type of links? Since crawl budget is limited, isn't it better to spend it on the pages we have invested in (the first type)?
Technical SEO | undaranfahujakia
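One way sites implement this kind of masking is to route outgoing links through a local endpoint that robots.txt disallows; the question describes a JavaScript redirect, but a server-side hop achieves a similar crawl-budget effect. A hedged sketch, with hypothetical paths and data:

```python
# Hedged sketch: mask outgoing news links behind a local /out/ endpoint
# that robots.txt disallows ("Disallow: /out/"). The path and the token
# mapping are hypothetical stand-ins.
from flask import Flask, abort, redirect

app = Flask(__name__)

NEWS_LINKS = {"abc123": "https://news-source.example/story-1"}

@app.route("/out/<token>")
def out(token):
    target = NEWS_LINKS.get(token)
    if target is None:
        abort(404)
    return redirect(target, code=302)  # temporary hop to the source site
```
-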
How to fix Submitted URL marked 'noindex'
Hi, I recently discovered that Google has stopped crawling/indexing my posts. So I checked Search Console and saw a coverage issue saying "Submitted URL marked 'noindex'". Any time I try Requesting Indexing for the affected pages, it tells me "Indexing request rejected". Here is my site URL: http://bit.ly/2kfqTEv Here is one of the affected pages: http://bit.ly/39aMenJ
Technical SEO | Favplug
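A first debugging step is to find where the noindex is coming from, since it can arrive as an HTTP header or a meta tag; a hedged sketch using the requests library, with a placeholder URL:

```python
import re
import requests

# Placeholder; substitute an affected URL from the coverage report.
url = "https://example.com/affected-post/"

r = requests.get(url, timeout=10)
# A noindex can be sent via the X-Robots-Tag header or a robots meta tag.
print("Status:", r.status_code)
print("X-Robots-Tag:", r.headers.get("X-Robots-Tag", "(none)"))
for tag in re.findall(r"<meta[^>]+robots[^>]*>", r.text, flags=re.I):
    print("Meta:", tag)
```
-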
Include or exclude noindex urls in sitemap?
We just added noindex tags to our pages with thin content. Should we include or exclude those URLs from our sitemap.xml file? I've read conflicting recommendations.
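The usual recommendation is to exclude them, so the sitemap only lists URLs you want indexed. A minimal sketch of filtering at generation time, assuming a hypothetical page list:

```python
# Hedged sketch: exclude noindexed URLs when writing sitemap.xml.
# The `pages` list is a hypothetical stand-in for real CMS data.
pages = [
    {"loc": "https://example.com/guide/", "noindex": False},
    {"loc": "https://example.com/thin-page/", "noindex": True},
]

entries = "\n".join(
    f"  <url><loc>{p['loc']}</loc></url>" for p in pages if not p["noindex"]
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n</urlset>"
)
print(sitemap)
```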
Technical SEO | vcj
-
301 or 302 or leave at 410
I have a client who manages vacation rental properties, and those properties get links. If an owner pulls their property off the rental market, the current status returned is a 410, which I instinctively want turned into a 301. The problem is that those properties often come back online at the same URL, so the question is: when a 301 is turned back into a 200, has anyone noticed a significant delay before that page ranks again? I know technically it should probably stay a 410, or maybe be a 302, but ... you know ... the link weight. 🙂
Technical SEO | BeanstalkIM
-
Timely use of robots.txt and meta noindex
Hi, I have checked every possible resource on content removal, but I am still unsure how to remove already-indexed content.
- When I use robots.txt alone, the URLs remain in the index. No crawl budget is wasted on them, but having 100,000+ completely identical login pages sitting in the omitted results can't mean anything good.
- When I use meta noindex alone, I keep the index clean but keep Googlebot busy crawling these no-value pages.
- When I use robots.txt and meta noindex together on existing content, I am asking Google to ignore the content while at the same time blocking it from ever crawling the noindex tag.
- Robots.txt plus the URL removal tool is still not a good solution, as I have failed to remove whole directories this way; it seems only exact URLs can be removed like that.
I need a clear solution that solves both issues (index and crawling). What I am trying now is the following: I remove these directories from robots.txt (one at a time, to test the theory) and simultaneously add the meta noindex tag to every page within the directory. The number of indexed pages should start decreasing (while useless crawling increases), and once few or none of these pages remain indexed, I will put the directory back into robots.txt and keep the noindex on all of its pages. Can this work the way I imagine, or do you have a better way? Thank you in advance for all your help.
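The mechanic driving this whole plan: while robots.txt blocks a directory, Googlebot cannot fetch those pages, so it never sees the meta noindex. A small sketch with Python's standard robotparser, using placeholder URLs, can confirm which state a given page is in:

```python
# Hedged sketch: check whether Googlebot may crawl a URL (and thus
# ever see its meta noindex). URLs below are placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/login/session123"
if rp.can_fetch("Googlebot", url):
    print("Crawlable: the meta noindex on this page can take effect.")
else:
    print("Blocked by robots.txt: the noindex will never be seen.")
```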
Technical SEO | Dilbak
-
Index or Noindex WordPress Categories?
I've read a few different opinions on this, but I'm still unclear on the best practice. I use my categories more like tags. Say I write a post about SEO, local marketing, and indexing; I would use the categories "seo", "marketing", and "indexing", so that same post shows up on all three category pages. If these category pages are all set to be indexed, what impact does that have on my post being indexed? Should I noindex all of the categories except the main ones to avoid too much duplicate content, or do you recommend noindexing all of them? I know some SEO plugins make this easy to do (I'm using Yoast). The only reason I'm hesitant to noindex all categories is that some of them rank well for their subject. I also tried noindexing about a month ago and lost a lot of blog traffic, so I reversed it. Now some of my category pages have overtaken my post rankings, which makes it harder for readers to find the content, but my overall blog traffic is back up. Given my situation, what is the best thing to do long term? I just started using my blog a lot more, so I want to know that I have it set up correctly. Thanks in advance!
Technical SEO | ChaseH
-
NoIndex/NoFollow pages showing up when doing a Google search using "site:" parameter
We recently launched a beta version of our new website on a subdomain of our existing site. The existing site is www.fonts.com, with the beta living at new.fonts.com. We do not want Google to crawl the new site until it is out of beta, so we have added the following on all pages: <meta name="robots" content="noindex, nofollow" />. However, one of our team members noticed that Google is displaying results from new.fonts.com when doing a "site:new.fonts.com" search. Is it possible that Google is indexing the content despite the noindex, nofollow tags? We have double-checked the syntax and it seems correct except for the trailing "/". I know Google still crawls noindexed pages; however, the fact that they're showing up in search results using the site: syntax is unsettling. Any thoughts would be appreciated!
Technical SEO | ChrisRoberts-MTI
-
Mask links with JS that point to noindex'ed pages
Hi, in an effort to prepare our site for Panda we dramatically reduced the number of indexable pages (from 100k down to 4k). All the remaining pages are being equipped with unique and valuable content. We still have the other pages around, since they represent searches with filter combinations we deem less interesting to the majority of users (hence they are not indexed). So I am wondering whether we should mask links to these non-indexed pages with JS, so that link juice isn't lost to them. Currently the targeted pages are non-indexed via "noindex, follow"; we might de-index them with robots.txt instead, though, if the "site:" query doesn't show improvements. Thanks, Sebastian
Technical SEO | derderko