Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Does having a lot of pages with noindex and nofollow tags affect rankings?
-
We are an e-commerce marketplace for alternative fashion and home decor, with over 1,000 stores on the marketplace. Earlier this year, in March 2018, we switched the website from HTTP to HTTPS and also added noindex and nofollow tags to the store about pages and store policies (mostly boilerplate content).
Our traffic dropped by 45% and we have since not recovered. We have done
I am wondering whether these tags could be affecting our rankings.
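A quick way to confirm which pages actually carry those tags is to check for the robots meta tag directly. The sketch below is only an illustration - the URLs in it are placeholders, not our real store pages.

```python
# Minimal sketch: check whether given pages carry a robots meta tag.
# The URLs are placeholders, not actual store pages.
import re
import urllib.request

urls = [
    "https://www.example.com/store/about",     # hypothetical store about page
    "https://www.example.com/store/policies",  # hypothetical store policies page
]

for url in urls:
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    # Look for <meta name="robots" content="..."> anywhere in the page
    match = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.IGNORECASE)
    print(url, "->", match.group(0) if match else "no robots meta tag found")
```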
-
Hi Gaston
Thank you for the detailed response and suggestions. I will follow up with my findings. Points 3 and 4 - I think there is something there.
James
-
Hi James,
Great that you've checked those items and there aren't any errors.
I'll break my response into bullet points so it's easier to respond to each one:
1- It bothers me that the traffic loss occurred in the same month as the HTTPS redirection.
That strongly suggests you've either killed, redirected or noindexed some pages that drove a lot of traffic.
2- It could also be that you didn't deserve that much traffic in the first place, either because you were ranking for searches you weren't relevant for or because Google didn't fully understand your site. That often happens when a migration takes place, as Google needs to re-calculate and fully understand the new site.
3- If you still have the old HTTP Search Console property, I'd check as many keywords as possible (in some scalable way), trying to find which ones have fallen in the rankings.
4- When checking those keywords, compare the URLs that were ranking; there could be some changes.
5- And lastly, have you made sure that there aren't any indexation and/or crawlability issues? Check the raw number of indexable URLs and compare it with the number that Search Console shows in the Index Coverage report - a rough sketch of that check follows below.
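For point 5, one rough way to get the raw count of indexable URLs is to walk the XML sitemap and skip anything carrying a noindex directive. This is only a sketch under assumptions - the sitemap location is a placeholder, and many sites use a sitemap index rather than a single file.

```python
# Rough sketch: count indexable URLs (no noindex) listed in an XML sitemap
# and compare the total against the Index Coverage report in Search Console.
# Assumes a single sitemap at /sitemap.xml; many sites use a sitemap index instead.
import re
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP = "https://www.example.com/sitemap.xml"  # placeholder location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

root = ET.fromstring(urllib.request.urlopen(SITEMAP).read())
urls = [loc.text for loc in root.findall(".//sm:loc", NS)]

indexable = 0
for url in urls:
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    if not re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html, re.IGNORECASE):
        indexable += 1

print(f"{indexable} indexable URLs out of {len(urls)} listed in the sitemap")
```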
Best wishes.
GR
-
Hi Gaston
Thank you for sharing your insights.
1. I have looked through all the pages and made sure we have not noindexed important pages
2. The migration went well; no double redirects or duplicate content.
3. I looked through Google Search Console and fixed all the errors (mostly complaints about 404 errors caused by products that are out of stock or from vendors who leave the website).
4. A friend said he thinks our pages are over-optimized, and that could be the reason. We went ahead and tweaked all the pages that were driving traffic, but saw no change.
If you have a moment, here is our website: www.rebelsmarket.com. If there is anything that stands out, please let me know. I appreciate your help.
James
-
Hi Joe
We have applied all the redirects carefully and tested them to make sure, and we have no duplicate content.
The URL: www.rebelsmarket.com
Redirect to SSL: March 2018 (we started with the blog and then moved to the product pages)
We added the noindex and nofollow tags at the same time.
Thank you
James
-
Hi John
Sorry, I have been tied up with my travel schedule. Here is the website: www.rebelsmarket.com
Thank you for your help John
-
Hi James,
Your issues lie elsewhere - did anything else happen during the update? My first thought is that the redirects were incorrectly applied.
- What's the URL?
- When was the redirect HTTP > HTTPS installed & how?
- When were the noindex and nofollow tags added?
You're a month in, so you should be able to recover. Sharing the URL would be useful if you need any further assistance.
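If it helps once you share the URL, one way to sanity-check the redirect is to follow the chain hop by hop. The sketch below is only an illustration - it uses a placeholder URL and the third-party requests package.

```python
# Minimal sketch: follow the redirect chain for an old HTTP URL and print each hop.
# Uses the third-party "requests" package; the URL below is a placeholder.
import requests

url = "http://www.example.com/some-product-page"  # hypothetical pre-migration URL

response = requests.get(url, allow_redirects=True, timeout=10)

for hop in response.history:
    print(hop.status_code, hop.url)
print(response.status_code, response.url, "(final)")

# Ideally this shows a single 301 hop straight to the matching https:// URL,
# not a chain of several redirects or a temporary 302.
```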
-
Hey James - would you be comfortable sharing the URL? I can run some diagnostics on it to see what other issues could be the cause of the drop.
Thanks!
John
-
Hi James,
I'm sorry to hear that you've lost over 45% of your traffic.
Absolutely not - having a lot of noindex and nofollow pages won't affect your rankings or your SEO strength. On the other hand, a traffic drop could be related to many issues, among them:
- Algorithm changes, there has been a lot of movement this year
- You've noindexed some of your high traffic pages
- Some part of the migration having gone wrong
- And the list could be endless.
I'd start by checking Search Console; there you can spot which keywords and/or URLs are no longer ranking as high.
This sort of tutorial on analyzing a traffic drop might come in handy: How to Diagnose SEO Traffic Drops: 11 Questions to Answer - Moz Blog
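If you export the query report from Search Console for the periods before and after the migration, a rough comparison like the sketch below can surface the queries that lost the most clicks. The file names and the Query/Clicks column headers are assumptions about a typical CSV export.

```python
# Rough sketch: compare two Search Console performance exports (CSV) and list
# the queries that lost the most clicks. File names and the "Query"/"Clicks"
# column headers are assumptions about a typical export.
import csv

def clicks_by_query(path):
    with open(path, newline="", encoding="utf-8") as f:
        return {row["Query"]: int(row["Clicks"]) for row in csv.DictReader(f)}

before = clicks_by_query("queries_before_migration.csv")
after = clicks_by_query("queries_after_migration.csv")

drops = sorted(
    ((query, clicks - after.get(query, 0)) for query, clicks in before.items()),
    key=lambda item: item[1],
    reverse=True,
)

for query, lost_clicks in drops[:20]:
    print(f"{lost_clicks:>6}  {query}")
```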
Hope it helps.
Best luck.
GR
Related Questions
-
Is it good or bad to add noindex for empty pages, which will get content dynamically after some days
We have followers, following and friends pages for each user who creates an account on our website. When a new user signs up, he may have 0 followers, 0 following and 0 friends, but over time those lists can grow. We have separate pages for followers, following and friends which Google is allowed to index. When a user doesn't have any followers/following/friends, those pages look empty and we get duplicate content and "description too short" issues. So is it better to add noindex to those pages temporarily and remove the noindex tag when there are at least 2 or more people on those pages? What are the side effects of adding noindex when there is no data on those pages, and what are the benefits of it?
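One way to handle this at the template level would be to emit the robots meta tag conditionally. The sketch below is only an illustration - the helper name is hypothetical and the threshold of 2 entries comes from the question above.

```python
# Sketch: decide the robots meta tag for a followers/following/friends page
# based on how many entries it actually lists. Framework-agnostic helper;
# the threshold of 2 comes from the question above.
def robots_meta_for_list_page(entry_count: int, threshold: int = 2) -> str:
    if entry_count < threshold:
        # Thin or empty page: keep it out of the index but let crawlers follow links.
        return '<meta name="robots" content="noindex, follow">'
    # Enough entries to be worth indexing.
    return '<meta name="robots" content="index, follow">'

print(robots_meta_for_list_page(0))   # -> noindex, follow
print(robots_meta_for_list_page(15))  # -> index, follow
```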
Intermediate & Advanced SEO | | swapnil120
-
Why is our noindex tag not working?
Hi, I have the following page where we've implemented a noindex tag. But when we run this page through Screaming Frog, or through this tool here, to verify the noindex is present and functioning, it shows that it's not. Yet if you view the source of the page, the code is present in the head tag. And unfortunately we've seen instances where Google is indexing pages we've noindexed. Any thoughts on the example above or why this is happening in Google? Eddy
Intermediate & Advanced SEO | | eddys_kap0
-
Landing pages for paid traffic and the use of noindex vs canonical
A client of mine has a lot of differentiated landing pages with only a few changes on each, but with the same intent and goal as the generic version. The generic version of the landing page is included in the navigation and sitemap and is indexed on Google. The purpose of the differentiated landing pages is to include the city and some minor changes in the text/imagery to best fit the AdWords text. Other than that, the intent and purpose of the pages are the same as the main/generic page. They are not to be indexed, nor am I trying to have hidden pages linking to the generic and indexed one (I'm not going the blackhat way). So - I want to avoid having the duplicate landing pages indexed (obviously), but I'm not sure if I should use noindex (nofollow as well?) or rel=canonical, since these landing pages are localized campaign versions of the generic page with more or less only paid traffic to them. I don't want to be accidentally penalized, but I still need the generic/main page to rank as high as possible... What would be your recommendation on this issue?
Intermediate & Advanced SEO | | ostesmorbrod0
-
Fresh page versus old page climbing up the rankings.
Hello, I have noticed that if I publish a webpage that Google has never seen, it ranks right away, usually in a decent position to start with (not great, but decent) - usually top 30 to 50 - and then over the months it slowly climbs up the rankings. However, if my page has existed for, let's say, 3 years and I make changes to it, it takes much longer to climb up the rankings. Has anyone else noticed that, and why is that?
Intermediate & Advanced SEO | | seoanalytics0
-
Do Page Anchors Affect SEO?
Hi everyone, I've been researching for the past hour and I cannot find a definitive answer anywhere! Can someone tell me if page anchors affect SEO at all? I have a client that has 9 page anchors on one landing page on their website - which means if you were to scroll through their website, the page is really, really long! I always thought that by using page anchors instead of sending users through to a dedicated landing page, it becomes harder to rank for those keywords, because a search spider will read all the content on that landing page and not know how to rank it for individual keywords. Am I wrong? The client in particular sells furniture, so on their landing page they have page anchors that jump the user down to "tables" or "chairs" or "lighting", for example. You can then click on one of the product images listed in that section of the page anchor and go through to an individual product page. Can anyone shed any light on this? Thanks!
Intermediate & Advanced SEO | | Virginia-Girtz1
-
"noindex, follow" or "robots.txt" for thin content pages
Does anyone have any testing evidence of what is better to use for pages with thin content, yet important pages to keep on a website? I am referring to content shared across multiple websites (such as e-commerce, real estate, etc). Imagine a website with 300 high-quality pages indexed and 5,000 thin product-type pages, which are pages that would not generate relevant search traffic. The question goes: does the interlinking value achieved by "noindex, follow" outweigh the negative of Google having to crawl all those "noindex" pages? With robots.txt, Google's crawling is focused on just the important pages that are indexed, and that may give rankings a boost. Any experiments with insight into this would be great. I do get the story about "make the pages unique", "get customer reviews and comments" etc....but the above question is the important question here.
Intermediate & Advanced SEO | | khi50
-
Meta NoIndex tag and Robots Disallow
Hi all, I hope you can spend some time to answer my first of a few questions 🙂 We are running a Magento site - the layered/faceted navigation nightmare has created thousands of duplicate URLs! Anyway, during my process of tackling the issue, I disallowed in robots.txt anything in the query string that was not a p (allowed this for pagination). After checking some pages in Google, I did a site:www.mydomain.com/specificpage.html and a few duplicates came up along with the original, with "There is no information about this page because it is blocked by robots.txt". So I had also added Meta Noindex, follow on all these duplicates, but I guess it wasn't being read because of robots.txt. So coming to my question: did robots.txt block access to these pages? If so, were these already in the index, and after disallowing them with robots, Googlebot could not read the Meta Noindex? Does Meta Noindex, Follow on pages actually help Googlebot decide to remove these pages from the index? I thought robots.txt would stop and prevent indexation? But I've read this: "Noindex is a funny thing, it actually doesn't mean 'You can't index this', it means 'You can't show this in search results'. Robots.txt disallow means 'You can't index this' but it doesn't mean 'You can't show it in the search results'." I'm a bit confused about how to use these in both preventing duplicate content in the first place and then helping to address dupe content once it's already in the index. Thanks! B0
Intermediate & Advanced SEO | | bjs2010
Any penalty for having rel=canonical tags on every page?
For some reason every webpage of our website (www.nathosp.com) has a rel=canonical tag. I'm not sure why the previous SEO manager did this, but we don't have any duplicate content that would require a canonical tag. Should I remove these tags? And if so, what's the advantage - or disadvantage of leaving them in place? Thank you in advance for your help. -Josh Fulfer
Intermediate & Advanced SEO | | mhans1