Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.
Why are pages still showing in SERPs, despite being NOINDEXed for months?
-
We have thousands of pages we've been trying to get de-indexed from Google for months now. They all have a meta robots tag with content="none", but they simply will not go away in the SERPs.
Here is just one example....
http://bitly.com/VutCFi
If you search this URL in Google, you will see that it is indexed, yet it has had the meta robots content="none" tag for many months. This is just one example of thousands of pages that will not get de-indexed. Am I missing something here? Does it have to do with using content="none" instead of content="noindex, follow"?
Any help is very much appreciated.
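One quick sanity check before debugging further is to confirm the noindex/none tag is actually present in the HTML that Googlebot receives. A minimal sketch using only the standard library (illustrative helper, not an official tool; the example page markup is made up):

```python
# Sketch: extract the meta robots directives a page actually serves,
# so you can confirm the noindex/none tag is really in the HTML the bot sees.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            content = attrs.get("content", "")
            self.directives.update(
                d.strip().lower() for d in content.split(",") if d.strip()
            )

def robots_directives(html: str) -> set:
    """Return the set of directives declared in the page's meta robots tag."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.directives

page = '<html><head><meta name="robots" content="none"></head><body></body></html>'
print(robots_directives(page))  # {'none'}
```

Running this over the served HTML (fetched with whatever client you like) tells you whether the tag is genuinely there, or whether, say, a template variant is serving some pages without it.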
-
Thanks for your reply,
Let me know if you are able to deindex those pages. I will wait. Also please share what you have implemented to deindex those pages.
-
A page can have a link to it, and still not be indexed, so I disagree with you on that.
But thanks for using the domain name. That will teach me to use a URL shortener...
-
Hm, that is interesting. So you're saying that it will get crawled, and thus will eventually become deindexed (as noindex is part of the content="none" directive), but since it's a dead end page, it just takes an extra long time for that particular page to get crawled?
-
Just to add to the other answers, you can also remove the URLs (or entire directory if necessary) via the URL removal tool in Webmaster Tools, although Google prefers you to use it for emergencies of sorts (I've had no problems with it).
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=164734
-
No, nofollow only tells the bot that the page is a dead end - that it should not follow any links on the page. That means any links on those pages won't be visited by the bot, which slows the overall crawling process for those pages.
If you block a page in robots.txt and the page is already in the index, it will remain in the index: the noindex (or content="none") directive won't be seen by the bot, so the page won't be removed from the index - it just won't be visited anymore.
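This interaction is easy to check mechanically: if robots.txt disallows the URL, the bot can never fetch the page and therefore never sees the noindex on it. A small sketch with the standard library's robots.txt parser (the robots.txt content and URLs are made-up examples):

```python
# Sketch: check whether robots.txt would stop a bot from fetching a page.
# If fetching is disallowed, a meta noindex on that page can never be seen,
# so an already-indexed URL stays in the index.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

def crawlable(url: str) -> bool:
    """True if the bot may fetch the URL and thus see any meta robots tag on it."""
    return rp.can_fetch("Googlebot", url)

print(crawlable("http://example.com/private/page.html"))  # False: noindex never seen
print(crawlable("http://example.com/public/page.html"))   # True
```

So to get pages de-indexed via a meta tag, they must remain crawlable until they have dropped out of the index.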
-
OK, so nofollow stops the page from being read at all? I thought nofollow just meant the links on the page would not be followed. Is meta nofollow essentially the same as blocking a page in robots.txt?
-
Hi Howard,
The page is in Google's index because you are still linking to it from your website. Here is the page that links to it:
http://www.2mcctv.com/product_print-productinfoVeiluxVS70CDNRDhtml.html
Because you are linking to the page, Google keeps crawling and indexing it - it had already indexed the page before it came to know about the "noindex" tag.
Lindsay has written awesome post about it here:
http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
After reading the blog post above, all my doubts about noindex, follow, and robots.txt were cleared up.
Thanks Lindsay
-
We always use the noindex directive in our robots.txt file.
-
Hi,
In order to deindex, you should use noindex on its own, since content="none" also means nofollow. You need follow for now, so the bot can reach all the other pages, see the noindex tag on each, and remove them from the index.
Once they are all out of the index, you can set "none" back on.
This is the main reason "none" is not widely used as a value - it's easy to shoot yourself in the foot with it.
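The shorthand relationship is worth spelling out: per the robots meta tag conventions, "none" expands to "noindex, nofollow" (and "all" to "index, follow"). A tiny sketch of that expansion:

```python
# Sketch: expand a meta robots content value into its effective directives.
# "none" is shorthand for "noindex, nofollow" - the dead-end combination
# discussed above.
def effective_directives(content: str) -> set:
    directives = {d.strip().lower() for d in content.split(",") if d.strip()}
    if "none" in directives:
        directives.discard("none")
        directives.update({"noindex", "nofollow"})
    if "all" in directives:
        directives.discard("all")
        directives.update({"index", "follow"})
    return directives

print(effective_directives("none"))             # {'noindex', 'nofollow'}: links not crawled
print(effective_directives("noindex, follow"))  # {'noindex', 'follow'}: links still crawled
```

The second form is the one you want while de-indexing: the page drops out, but crawl paths through it stay open.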
On the other hand, you need to check whether Googlebot is actually reaching those pages:
- First, make sure you don't have any robots.txt restrictions.
- Check when Googlebot last hit any of the pages - that will give you a good idea, and you can make a prediction from it.
If those pages are in the supplemental index, you may need to wait some time for Googlebot to revisit them.
One last note: build XML sitemaps with all of those pages and submit them via WMT - that will definitely help get them in front of the firing squad, and also let you monitor them better.
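Building that sitemap is mechanical; here is a minimal sketch with the standard library (the URLs are placeholders - substitute the pages you want re-crawled):

```python
# Sketch: build a minimal XML sitemap listing the pages you want re-crawled,
# so the bot revisits them sooner and sees the noindex tag on each.
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

xml_out = build_sitemap([
    "http://www.example.com/old-page-1.html",
    "http://www.example.com/old-page-2.html",
])
print(xml_out)
```

Submitting the file via Webmaster Tools also gives you an indexed-URL count for exactly this set of pages, which makes the de-indexing progress measurable.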
Hope it helps.
-
Related Questions
-
Duplicate content, although page has "noindex"
Hello, I had an issue with some pages being listed as duplicate content in my weekly Moz report. I've since discussed it with my web dev team and we decided to stop the pages from being crawled. The web dev team added this coding to the pages <meta name='robots' content='max-image-preview:large, noindex dofollow' />, but the Moz report is still reporting the pages as duplicate content. Note from the developer "So as far as I can see we've added robots to prevent the issue but maybe there is some subtle change that's needed here. You could check in Google Search Console to see how its seeing this content or you could ask Moz why they are still reporting this and see if we've missed something?" Any help much appreciated!
Technical SEO | Jun 9, 2022, 2:29 PM | rj_dale0 -
.xml sitemap showing in SERP
Our sitemap is showing in Google's SERP. While it's only for very specific queries that don't seem to have much value (it's a healthcare website and when a doctor who isn't with us is search with the brand name so 'John Smith Brand,' it shows if there's a first or last name that matches the query), is there a way to not make the sitemap indexed so it's not showing in the SERP. I've seen the "x-robots-tag: noindex" as a possible option, but before taking any action wanted to see if this was still true and if it would work.
Technical SEO | Nov 11, 2019, 6:24 PM | Kyleroe950 -
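For non-HTML files like an .xml sitemap, the noindex has to travel in the HTTP response headers rather than a meta tag. A small sketch of the check, assuming the header is exposed as a plain dict (the header values are made-up examples):

```python
# Sketch: decide from HTTP response headers whether a file (e.g. an .xml
# sitemap) is marked noindex via the X-Robots-Tag header.
def is_noindexed(headers: dict) -> bool:
    value = headers.get("X-Robots-Tag", "")
    directives = {d.strip().lower() for d in value.split(",")}
    return "noindex" in directives or "none" in directives

print(is_noindexed({"X-Robots-Tag": "noindex"}))          # True
print(is_noindexed({"Content-Type": "application/xml"}))  # False
```

In practice you would fetch the sitemap URL, inspect its headers with a helper like this, and then configure the web server to add `X-Robots-Tag: noindex` on that path if it is missing.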
Does a no-indexed parent page impact its child pages?
If I have a page* in WordPress that is set as private and is no-indexed with Yoast, will that negatively affect the visibility of other pages that are set as children of that first page? *The context is that I want to organize some of the pages on a business's WordPress site into silos/directories. For example, if the business was a home remodeling company, it'd be convenient to keep all the pages about bathrooms, kitchens, additions, basements, etc. bundled together under a "services" parent page (/services/kitchens/, /services/bathrooms/, etc.). The thing is that the child pages will all be directly accessible from the menus, so there doesn't need to be anything on the parent /services/ page itself. Another such parent page/directory/category might be used to keep different photo gallery pages together (/galleries/kitchen-photos/, /galleries/bathroom-photos/, etc.). So again, would it be safe for pages like /services/kitchens/ and /galleries/addition-photos/ if the /services/ and /galleries/ pages (but not /galleries/* or anything like that) are no-indexed? Thanks!
Technical SEO | Mar 18, 2017, 6:22 PM | BrianAlpert781 -
Indexed pages
Just started a site audit and trying to determine the number of pages on a client site and whether there are more pages being indexed than actually exist. I've used four tools and got four very different answers... Google Search Console: 237 indexed pages Google search using site command: 468 results MOZ site crawl: 1013 unique URLs Screaming Frog: 183 page titles, 187 URIs (note this is a free licence, but should cut off at 500) Can anyone shed any light on why they differ so much? And where lies the truth?
Technical SEO | Oct 30, 2016, 3:12 PM | muzzmoz1 -
Home Page Ranking Instead of Service Pages
Hi everyone! I've noticed that many of our clients have pages addressing specific queries related to specific services on their websites, but that the Home Page is increasingly showing as the "ranking" page. For example, a plastic surgeon we work with has a page specifically talking about his breast augmentation procedure for Miami, FL but instead of THAT page showing in the search results, Google is using his home page. Noticing this across the board. Any insights? Should we still be optimizing these specific service pages? Should I be spending time trying to make sure Google ranks the page specifically addressing that query because it SHOULD perform better? Thanks for the help. Confused SEO :/, Ricky Shockley
Technical SEO | Feb 24, 2016, 5:02 AM | RickyShockley0 -
Is the Authority of Individual Pages Diluted When You Add New Pages?
I was wondering if the authority of individual pages is diluted when you add new pages (in Google's view). Suppose your site had 100 pages and you added 100 new pages (without getting any new links). Would the average authority of the original pages significantly decrease and result in a drop in search traffic to the original pages? Do you worry that adding more pages will hurt pages that were previously published?
Technical SEO | Aug 14, 2013, 8:47 AM | Charlessipe0 -
How Does Google's "index" find the location of pages in the "page directory" to return?
This is my understanding of how Google's search works, and I am unsure about one thing in specific: Google continuously crawls websites and stores each page it finds (let's call it "page directory") Google's "page directory" is a cache so it isn't the "live" version of the page Google has separate storage called "the index" which contains all the keywords searched. These keywords in "the index" point to the pages in the "page directory" that contain the same keywords. When someone searches a keyword, that keyword is accessed in the "index" and returns all relevant pages in the "page directory" These returned pages are given ranks based on the algorithm The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a url in the "page directory", and the entries in the "index" contain these urls. Since Google's "page directory" is a cache, would the urls be the same as the live website (and would the keywords in the "index" point to these urls)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that url in Google's cache? The reason I want to discuss this is to know the effects of changing a page's url by understanding how the search process works better.
Technical SEO | Jun 2, 2013, 12:00 PM | reidsteven750 -
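The mental model in that question is essentially an inverted index over a document cache. A toy sketch of that structure (all pages, URLs, and keywords are made-up examples; real search engines are vastly more elaborate):

```python
# Sketch: a toy inverted index - keywords in the "index" map to the URLs
# of cached pages in the "page directory", which is how a keyword lookup
# finds the relevant pages.
from collections import defaultdict

page_directory = {  # cache: URL -> stored copy of the page's text
    "www.website.com/page1": "green tea benefits and brewing",
    "www.website.com/page2": "black tea brewing guide",
}

index = defaultdict(set)  # keyword -> set of URLs containing it
for url, text in page_directory.items():
    for word in text.split():
        index[word].add(url)

def search(keyword: str):
    """Look up the keyword in the index and return matching cached page URLs."""
    return sorted(index.get(keyword, set()))

print(search("brewing"))  # both pages
print(search("green"))    # only page1
```

In this model the URL is the key into the cache, which is why changing a page's URL means the old entry has to be dropped and the new one re-crawled and re-indexed.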
What's the difference between a category page and a content page
Hello, Little confused on this matter. From a website architectural and content stand point, what is the difference between a category page and a content page? So let's say I was going to build a website around tea. My home page would be about tea. My category pages would be: White Tea, Black Tea, Oolong Tea and British Tea, correct? (I would write content for each of these topics on their respective category pages, correct?) Then suppose I wrote articles on organic white tea, white tea recipes, how to brew white tea, etc. (Are these content pages?) Do I then link FROM my category page (White Tea) to my content pages (i.e. Organic White Tea, white tea recipes, etc.), or do I link from my content page to my category page? I hope this makes sense. Thanks, Bill
Technical SEO | May 22, 2011, 5:03 PM | wparlaman0