Is "noindex,follow" a waste of link juice?
-
On my WordPress shopping cart plugin I have three pages, /account, /checkout and /terms, to which I have added the "noindex,follow" attribute. But I think I may be wasting link juice on these pages: they are not going to be indexed anyway, so is there any point in passing them any link juice? I can add "noindex,nofollow" to the pages themselves. However, the actual anchor text links to these pages in the site header will remain followed, as I have no means of amending that right now. So this presents the following two scenarios:
– No juice flows from the homepage to these 3 pages (GOOD) – this would be perfect, as the pages themselves have the nofollow attribute.
– Juice flows from the homepage to these pages (BAD) – the juice flows from the homepage anchor text links to these 3 pages BUT then STOPS there, because they have the "nofollow" attribute on the page. This would be the bigger problem, and if this is the case and I can't stop the juice from flowing in, then I'd rather let it flow out to other pages.
I hope you understand my question; any input is very much appreciated. Thanks
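For context, this is roughly what the fixed part of the setup looks like – the header links that cannot be changed right now and that will keep pointing at the three pages no matter which directive is chosen. The markup is only an illustrative sketch; the real theme header will differ.

<!-- Site header, rendered on every page. These are ordinary (followed)
     anchors, so they keep pointing link equity at the three pages. -->
<nav>
  <a href="/account">My account</a>
  <a href="/checkout">Checkout</a>
  <a href="/terms">Terms and conditions</a>
</nav>
<!-- The open question is which robots directive belongs in the <head>
     of /account, /checkout and /terms – see the answers below. -->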
-
If you noindex a page, link juice will still flow into that page. If you nofollow it, juice will still flow in, but it will not flow back out of it again.
You should always use "noindex,follow" if you want the link juice to return to your indexed pages. Even then some link juice will be lost, because a share of it stays on the noindex page.
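To make the two options concrete, these are the page-level directives being compared; each goes in the <head> of the page in question (shown here as a plain sketch):

<!-- Keeps the page out of the index, but the links on it are still
     followed, so they can pass value onward to other pages. -->
<meta name="robots" content="noindex, follow">

<!-- Keeps the page out of the index AND makes it a dead end: none of
     its outgoing links pass any value. -->
<meta name="robots" content="noindex, nofollow">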
I tried as well and could not find it, but here is a quote from Matt Cutts: "Eric Enge: Can a NoIndex page accumulate PageRank?
Matt Cutts: A NoIndex page can accumulate PageRank, because the links are still followed outwards from a NoIndex page.
Eric Enge: So, it can accumulate and pass PageRank.
Matt Cutts: Right, and it will still accumulate PageRank, but it won't be showing in our Index. So, I wouldn't make a NoIndex page that itself is a dead end. You can make a NoIndex page that has links to lots of other pages.
For example you might want to have a master Sitemap page and for whatever reason NoIndex that, but then have links to all your sub Sitemaps.
Eric Enge: Another example is if you have pages on a site with content that from a user point of view you recognize that it's valuable to have the page, but you feel that is too duplicative of content on another page on the site
That page might still get links, but you don't want it in the Index and you want the crawler to follow the paths into the rest of the site.
Matt Cutts: That's right. Another good example is, maybe you have a login page, and everybody ends up linking to that login page. That provides very little content value, so you could NoIndex that page, but then the outgoing links would still have PageRank.
Now, if you want to you can also add a NoFollow metatag, and that will say don't show this page at all in Google's Index, and don't follow any outgoing links, and no PageRank flows from that page. We really think of these things as trying to provide as many opportunities as possible to sculpt where you want your PageRank to flow, or where you want Googlebot to spend more time and attention."
http://www.stonetemple.com/articles/interview-matt-cutts.shtml
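The pattern Cutts describes – a page kept out of the index whose outgoing links are still followed – looks something like this in practice. The file names below are invented purely for illustration.

<!-- A utility page (a login page or a master sitemap, say) that attracts
     links but has little search value of its own: keep it out of the
     index, but leave its links followed so PageRank can flow through
     to the rest of the site. -->
<head>
  <meta name="robots" content="noindex, follow">
</head>
<body>
  <a href="/sitemap-products.html">Product sitemap</a>
  <a href="/sitemap-articles.html">Article sitemap</a>
  <a href="/">Back to the homepage</a>
</body>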
-
-
I just wanted to share that I completely agree with EGOL and the understanding he shared. I had skipped responding to this question because I didn't want to write out all of the necessary disclaimers, but EGOL tackled the question anyway and offered great detail in both his original reply and the follow-up.
-
Great answer, and in this specific case I too have the "noindex, follow" attribute on the pages that I do not want indexed.
Regarding competitors – I study them, both on-site and their link profiles, especially the successful ones, to learn from them. Most of the SEO strategies I've learned have come from reading forums, blogs, etc. Quite often people there have conflicting views, so I try to find real-life examples of tactics that are quite likely working for a successful site, look for a pattern, and where I spot one, I try to implement it on my own sites.
You, on the other hand, have experience and proven philosophies :) – something I am dying to acquire.
Thanks
-
Here is a philosophy that I have... (I am not trying to be a wise guy... just sayin'....)
I don't pay a lot of attention to the methods used by my competitors. Instead I decide what I think will work best for me and then do it.
Right now I have pages on my site that I don't want in the search engines index. So I have code on them as follows....
name="robots" content="noindex, follow" />
I believe that code keeps them out of the index but allows pagerank to flow through them to other pages. I offer that here so that anyone can tell me if it is wrong.
I welcome anyone who can set me straight or anyone who can suggest a better method.
However, I am not going to look at my competitors and try to figure out what they are doing because there is a very good chance that they don't know what they are doing. (I think your competitors don't know what they are doing.)
I have absolutely no problem with doing things differently from my competitors. In fact I think that mimicking them is the best way to finish behind them.
-
EGOL, thank you so much for your input, I really value your opinion. However, I have a follow-up question, and I may be muddling things up here, but here it is:
Many of my successful competitors in various niches have added rel="nofollow" to certain internal links.
For example:
1. On the homepage of one WordPress site, the anchor text links to the WP tag pages have rel="nofollow", while the tag pages themselves are "noindex,follow".
2. Also, all links in the header are rel="nofollow". The only followed links are those to post pages, and the post pages are being used for navigation.
Every page that is the target of a rel="nofollow" anchor is itself "noindex,follow". Nowhere has "nofollow" been applied to a whole page; it only appears on certain anchor text links.
Is that meaningfully different from making the whole page nofollow? Because here only specific pages are being stopped from receiving any link juice.
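The distinction being asked about, shown as markup (a sketch only – the tag URL and anchor text are invented for illustration):

<!-- Link-level: only this one link is nofollowed. The target page can
     still receive value through any other, ordinary links pointing at it. -->
<a href="/tag/blue-shoes/" rel="nofollow">blue shoes</a>

<!-- Page-level: placed in the <head>, this nofollows every link on the
     page, so none of its outgoing links pass value anywhere. -->
<meta name="robots" content="nofollow">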
-
I am going to explain how I understand this. I could be wrong on some of the details for two different reasons: 1) I am simply wrong, or 2) I am correct according to what the search engines have said in public, but they are doing something different in practice.
When nofollow was first introduced, a lot of people used it to "sculpt" the flow of pagerank. They were told at the time, by some search engine employees, that pagerank did not flow through nofollowed links. That is how the search engines that made public statements about it were supposed to be treating it in the beginning.
Later we learned that Google (and maybe other search engines) changed how they handle nofollow, and the change was to evaporate ALL of the pagerank that would have flowed through a nofollowed link. In that situation it would be a bad idea to use nofollow, because that pagerank was permanently lost.
Do they still handle nofollowed links that way? I don't know.
However, how I currently understand it is that if you designate a page as noindex/follow, then pagerank flows into that page and on through the links on that page. This would conserve any pass-through pagerank, but it might result in a loss of whatever pagerank is retained by that page (or maybe it all passes through, since the page is noindex – I don't know).
So, if I had pages that I wanted to link to on my site but didn't want in the index, I would use noindex/follow to allow the pagerank that flows into those pages to pass through to other pages on my site. But I would never be sure that it really works that way. Also keep in mind that there are numerous search engines and there could be many different ways of treating these links – and pagerank is a substance unique to Google.
If anyone understands this differently or suspects that it does not work as explained, please let us know.