Can too many "noindex" pages compared to "index" pages be a problem?
-
Hello,
I have a question for you: our website virtualsheetmusic.com includes thousands of product pages, and due to Panda penalties in the past, we have noindexed most of them hoping for some sort of recovery (not seen yet, though!). So we currently have about 4,000 "index" pages compared to about 80,000 "noindex" pages.
Now we plan to add an additional 100,000 product pages from a new publisher to offer our customers a wider choice of music, and these new pages will also be marked "noindex, follow".
At the end of the integration process, we will end up with something like 180,000 "noindex, follow" pages compared to about 4,000 "index, follow" pages.
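For reference, this is the standard robots meta tag we place in the <head> of each of those pages:

<meta name="robots" content="noindex, follow" />
<!-- "noindex" asks engines to drop the page from the index; "follow" still lets them follow its links -->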
Here is my question: can this huge discrepancy between 180,000 "noindex" pages and 4,000 "index" pages be a problem? Can this kind of scenario have any negative effect on our current organic search profile, or is it something that doesn't actually matter?
Any thoughts on this issue are very welcome.
Thank you!
Fabrizio
-
Julian, we sell digital sheet music, and the additional 100,000 are products from the Alfred Music publishing company. Of course they will not be "high quality pages," but they are product pages, each one offering a piece of music. We are an e-commerce website; how can we avoid having product pages? But of course, as Wesley said above, we can improve the content quality of each product page by giving more custom information for each product, increasing user reviews, etc.
Other suggestions?
-
Thank you Wesley, yes, I think you are right. Our business is really suffering without the traffic those "noindex" pages used to bring, and after many months we still don't see a recovery. The best approach would probably be to keep the pages in the index and differentiate them as much as we can.
Thank you!
-
Panda is probably the worst penalty to have. Very few sites ever recover, even though site owners have spent a lot of time, effort, and money trying to do so. See, for example: http://searchengineland.com/google-panda-two-years-later-losers-still-losing-one-real-recovery-149491
In this video, at about 12:43, Matt Cutts is clear: if you think it's low quality, 404 it; in other words, delete it.
May I ask why you want to keep these 180,000 pages live? And why are you planning to add another 100,000 pages? Surely they can't be high quality pages?
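For what it's worth, "404 it" can be done at the server level too. A minimal Apache (mod_alias) sketch; the path is a made-up example, not from your site:

# Returns "410 Gone" for the removed page; a plain 404 works for this purpose as well
Redirect gone /old-thin-product-page/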
-
Fabrizio, as far as I know Google Panda is now part of the standard Google algorithm and no longer runs as a periodic update. Penguin still does, though.
If your product pages count as duplicate content in Google's eyes, try to see if you can fix that instead of noindexing them. Is there no way you can update the products so they display a more prominent description? I understand that doing it manually isn't an option, because there are way too many products for that.
I did notice that a lot of your product pages carry the same standard text:
"This item includes: PDF (digital sheet music to print), Scorch files (for online playing, transposition and printing), Videos, MIDI and Mp3 audio files (including Mp3 music accompaniment files)*
Genre: classical
Skill Level: medium"
Since this is basically the only text on a lot of pages, I think it's a big part of the problem. Maybe you can change this text so it looks different for every product?
Try tools like http://www.plagspotter.com/ to find the duplicate content and see which solution best fits your specific problem.
I hope I helped; if you need more help, let me know.
-
I understand what you mean, and I agree with you in general, but for our own website specifically, I have no idea who put that link on that page, which is, by the way, a "nofollow" link. We never built links; all our incoming links are either natural or from our own affiliates. I don't see much of "that stuff" in our backlink profile... am I mistaken?
Anyhow, yes, we are aware the situation is quite complex. Thank you again.
-
I actually looked at the competitors ranking #3 and #4 for the phrase "download sheet music," since you're ranking 5th. Either way, it's not a matter of too many or too few links. It's how much of the link profile is authentic versus how much is made up of stuff like this:
http://www.dionneco.com/2011/02/love-is-a-parallax/
That's what I meant by fake links.
I think what you may be missing is how complex the situation really is. There's a lot more to consider than a number in Open Site Explorer, which actually reflects only a portion of what's really out there.
You may also want to look at changes you can make on-site. I'm a firm believer that proper HTML, accessibility, UX, and all of that really matter.
-
Thank you Takeshi, I think you got the problem right. The "crawling" side of the issue is something I was thinking about too!
We are actually working on every aspect of our website to improve its content, because we have suffered a lot from Panda over the past two years. Here is the strategy we have been following since March:
1. "noindexing" most of our thin or almost-duplicate content to get it removed from the index
2. Improve our best content and differentiate it as much as we can with compelling content (this takes a long time!)
3. Consolidating similar pages with the use of canonical tags.
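For anyone reading along, the consolidation in point 3 is just the standard canonical link element in the <head> of each similar page; the URL here is a made-up example, not one of our real pages:

<link rel="canonical" href="https://www.virtualsheetmusic.com/violin/moonlight-sonata/" />
<!-- placed on near-duplicate variants so engines treat this URL as the one to index -->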
To tackle the "slower crawling" problem you have highlighted, do you think it would be better for us to stop engines from crawling those pages altogether via robots.txt once they have been removed from the index? Would that solve the crawl issue? I could do that at least for the 100,000 new product pages we plan to add; a sketch of what I mean is below.
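A minimal robots.txt sketch (the path is a placeholder, not our real URL structure):

User-agent: *
# Hypothetical directory for the new publisher's product pages
Disallow: /new-publisher-products/

(I understand that crawlers can no longer see a page's "noindex" tag once it is blocked in robots.txt, which is why I'd only do this after the pages are already out of the index.)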
Thank you!
-
Wesley, that's because we were penalized by Panda several times in the past... so we are trying the "clean-up" strategy in the hope of being "de-penalized" at the next related algorithm update. It looks like we had too many "thin" or "almost duplicate" pages... that's why we removed so many pages from the index! But if we don't see improvements in the coming 1-2 months, I guess we'll put the product pages back in the index, because our business is suffering a great deal!
-
Colin, what do you mean by "fake links" exactly? Our link profile actually looks to be in better shape than our main competitors':
virtualsheetmusic.com (our site): 614,013 links from 2,233 root domains
sheetmusicplus.com (competitor): 5,322,596 links from 6,149 root domains (worse than our profile!)
musicnotes.com (competitor): 6,527,429 links from 2,914 root domains (much worse than our profile!)
Am I missing anything?
-
The discrepancy between noindexed and indexed pages is not in itself a problem. However, having all those pages will present a challenge to Google in terms of crawling: even though the pages won't be indexed, Google will still need to spend some of your limited crawl budget crawling them.
Also, to recover from Panda it's necessary not only to noindex duplicate content but also to improve your indexed content. That means consolidating similar pages into one page, writing unique content for your pages, and getting unique user-generated content such as reviews.
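To illustrate the consolidation point: besides canonical tags, a 301 redirect is the usual way to merge near-duplicates for good. A minimal Apache (mod_alias) sketch with made-up URLs:

# Permanently merge a near-duplicate variant into the main product page
Redirect 301 /violin/sonata-print-version/ https://www.virtualsheetmusic.com/violin/sonata/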
-
Why would you want to noindex your product pages? They seem like exactly the kind of pages you want to be found on.
There shouldn't be a problem with the ratio of indexed to noindexed pages as such, except that you won't get found through the noindexed ones, and product pages are the kind of pages you REALLY want to be found on.
I think you should rethink your strategy for recovering from the penalties.
Try to find out exactly where the penalties came from and fix the errors in that area of your website.
-
Can't say I've been in that situation, but search engines seem to interpret that tag as an on/off switch, and I think you probably know that your problems aren't related to, or solvable by, robots meta tags.
You need fewer fake links. OSE finds well over half a million links from 3K root domains to your site. Look at your competitors: a few thousand links from a handful of domains.
It's a shame, because it seems like the internet wanted to make you the authority naturally: you've got a handful of really solid links coming in. If you could shed the spam somehow, you'd be doing a lot better.
So yeah, stating the obvious, I know. Best of luck to you, and I hope the site recovers!
"Avoid Too Many Internal Links" when you have a mega menu
Using the on-page grader and whilst further investigating internal linking, I'm concerned that as the ecommerce website has a very link heavy mega menu the rule of 100 may be impeding on the contextual links we're creating. Clearly we don't want to no-follow our entire menu. Should we consider no-indexing the third-level- for example short sleeve shirts here... Clothing > Shirts > Short Sleeve Shirts What about other pages we're don't care to index anyway such as the 'login page' the 'cart' the search button? Any thoughts appreciated.
Intermediate & Advanced SEO | | Ant-Scarborough0 -
Google indexed "Lorem Ipsum" content on an unfinished website
Hi guys. So I recently created a new WordPress site and started developing the homepage. I completely forgot to disallow robots to prevent Google from indexing it and the homepage of my site got quickly indexed with all the Lorem ipsum and some plagiarized content from sites of my competitors. What do I do now? I’m afraid that this might spoil my SEO strategy and devalue my site in the eyes of Google from the very beginning. Should I ask Google to remove the homepage using the removal tool in Google Webmaster Tools and ask it to recrawl the page after adding the unique content? Thank you so much for your replies.
Intermediate & Advanced SEO | | Ibis150 -
Does redirecting from a "bad" domain "infect" the new domain?
Hi all, So a complicated question that requires a little background. I bought unseenjapan.com to serve as a legitimate news site about a year ago. Social media and content growth has been good. Unfortunately, one thing I didn't realize when I bought this domain was that it used to be a porn site. I've managed to muck out some of the damage already - primarily, I got major vendors like Macafee and OpenDNS to remove the "porn" categorization, which has unblocked the site at most schools & locations w/ public wifi. The sticky bit, however, is Google. Google has the domain filtered under SafeSearch, which means we're losing - and will continue to lose - a ton of organic traffic. I'm trying to figure out how to deal with this, and appeal the decision. Unfortunately, Google's Reconsideration Request form currently doesn't work unless your site has an existing manual action against it (mine does not). I've also heard such requests, even if I did figure out how to make them, often just get ignored for months on end. Now, I have a back up plan. I've registered unseen-japan.com, and I could just move my domain over to the new domain if I can't get this issue resolved. It would allow me to be on a domain with a clean history while not having to change my brand. But if I do that, and I set up 301 redirects from the former domain, will it simply cause the new domain to be perceived as an "adult" domain by Google? I.e., will the former URL's bad reputation carry over to the new one? I haven't made a decision one way or the other yet, so any insights are appreciated.
Intermediate & Advanced SEO | | gaiaslastlaugh0 -
What is best practice for "Sorting" URLs to prevent indexing and for best link juice ?
We are now introducing 5 links in all our category pages for different sorting options of category listings.
Intermediate & Advanced SEO | | lcourse
The site has about 100.000 pages and with this change the number of URLs may go up to over 350.000 pages.
Until now google is indexing well our site but I would like to prevent the "sorting URLS" leading to less complete crawling of our core pages, especially since we are planning further huge expansion of pages soon. Apart from blocking the paramter in the search console (which did not really work well for me in the past to prevent indexing) what do you suggest to minimize indexing of these URLs also taking into consideration link juice optimization? On a technical level the sorting is implemented in a way that the whole page is reloaded, for which may be better options as well.0 -
Do I need to re-index the page after editing URL?
Hi, I had to edit some of the URLs. But, google is still showing my old URL in search results for certain keywords, which ofc get 404. By crawling with ScremingFrog it gets me 301 'page not found' and still giving old URLs. Why is that? And do I need to re-index pages with new URLs? Is 'fetch as Google' enough to do that or any other advice? Thanks a lot, hope the topic will help to someone else too. Dusan
Intermediate & Advanced SEO | | Chemometec0 -
Pipe ("|") in my website's title is being replaced with ":" in Google results
Hi , One of the websites I'm promoting and working on is www.pau-brasil.co.il.
Intermediate & Advanced SEO | | Kadel
It's wordpress-based website and as you can see the html's Title is "PauBrasil | some hebrew slogan".
(Screenshot: http://i.imgur.com/2f80EEY.gif)
When I'm searching for "PauBrasil" (Which is the brand's name) , one of the results google shows is "PauBrasil: Some Hebrew Slogan" (Screenshot: http://i.imgur.com/eJxNHrO.gif ) Why does the pipe is being replaced with ":" ?
And not just that , as you can see there's a "blank space" missing between the the ":" to the slogan.
(note: the websites has been indexed by google crawler at least 4 times so I find it hard to believe it can be the reason) I've keep on looking and found out that there's another page in that website with the exact same title
but when I'm looking for it in google , it shows the title as it really is , with pipe. ("|").
(Screenshot: http://i.imgur.com/dtsbZV2.gif) Have you ever encountered something like that?
Can it be that the duplicated title cause that weird "replacement"? Thanks in advance,
Kadel0 -
Our login pages are being indexed by Google - How do you remove them?
Each of our login pages show up under different subdomains of our website. Currently these are accessible by Google which is a huge competitive advantage for our competitors looking for our client list. We've done a few things to try to rectify the problem: - No index/archive to each login page Robot.txt to all subdomains to block search engines gone into webmaster tools and added the subdomain of one of our bigger clients then requested to remove it from Google (This would be great to do for every subdomain but we have a LOT of clients and it would require tons of backend work to make this happen.) Other than the last option, is there something we can do that will remove subdomains from being viewed from search engines? We know the robots.txt are working since the message on search results say: "A description for this result is not available because of this site's robots.txt – learn more." But we'd like the whole link to disappear.. Any suggestions?
Intermediate & Advanced SEO | | desmond.liang1 -
Getting Pages Requiring Login Indexed
Somehow certain newspapers' webpages show up in the index but require login. My client has a whole section of the site that requires a login (registration is free), and we'd love to get that content indexed. The developer offered to remove the login requirement for specific user agents (eg Googlebot, et al.). I am afraid this might get us penalized. Any insight?
Intermediate & Advanced SEO | | TheEspresseo0