Noindex,follow is a waste of link juice?
-
On my WordPress shopping cart plugin, I have three pages (/account, /checkout and /terms) to which I have added the "noindex,follow" attribute. But I think I may be wasting link juice on these pages, since they are not to be indexed anyway, so is there any point in giving them any link juice? I can add "noindex,nofollow" to the pages themselves. However, the actual anchor text links to these pages in the site header will remain "follow", as I have no means of amending that right now. So this presents the following two scenarios:

– No juice flows from the homepage to these 3 pages (GOOD) – This would be perfect, as the pages themselves have the nofollow attribute.
– Juice flows from the homepage to these pages (BAD) – This may mean that the juice flows from the homepage anchor text links to these 3 pages BUT then STOPS there, as they have the "nofollow" attribute on the page. This would be the bigger problem; if this is the case and I can't stop the juice from flowing in, then I'd rather let it flow out to other pages.

I hope you understand my question; any input is very much appreciated. Thanks
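For reference, the two directives being compared would sit in each page's head like this (a sketch; the exact markup a WordPress plugin emits may differ):

```html
<!-- Current setup on /account, /checkout and /terms:
     keep the page out of the index, but let juice flow out through its links -->
<meta name="robots" content="noindex, follow" />

<!-- Alternative under consideration:
     keep the page out of the index AND stop juice from flowing out of it -->
<meta name="robots" content="noindex, nofollow" />
```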
-
If you noindex a page, link juice will still flow to that page. If you nofollow it, juice will still flow in but will not flow out of it again.
You should always add "noindex,follow" if you want the link juice to return to your indexable pages. Even then, some link juice will be lost, as it stays on that noindexed page.
I tried as well and could not find it, but here is a quote from Matt Cutts: "Eric Enge: Can a NoIndex page accumulate PageRank?
Matt Cutts: A NoIndex page can accumulate PageRank, because the links are still followed outwards from a NoIndex page.
Eric Enge: So, it can accumulate and pass PageRank.
Matt Cutts: Right, and it will still accumulate PageRank, but it won't be showing in our Index. So, I wouldn't make a NoIndex page that itself is a dead end. You can make a NoIndex page that has links to lots of other pages.
For example you might want to have a master Sitemap page and for whatever reason NoIndex that, but then have links to all your sub Sitemaps.
Eric Enge: Another example is if you have pages on a site with content that from a user point of view you recognize that it's valuable to have the page, but you feel that is too duplicative of content on another page on the site
That page might still get links, but you don't want it in the Index and you want the crawler to follow the paths into the rest of the site.
Matt Cutts: That's right. Another good example is, maybe you have a login page, and everybody ends up linking to that login page. That provides very little content value, so you could NoIndex that page, but then the outgoing links would still have PageRank.
Now, if you want to you can also add a NoFollow metatag, and that will say don't show this page at all in Google's Index, and don't follow any outgoing links, and no PageRank flows from that page. We really think of these things as trying to provide as many opportunities as possible to sculpt where you want your PageRank to flow, or where you want Googlebot to spend more time and attention."
http://www.stonetemple.com/articles/interview-matt-cutts.shtml
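The hub-page pattern Cutts describes (a noindexed master Sitemap page whose outgoing links still pass PageRank to the sub Sitemaps) would look something like this; the URLs here are hypothetical:

```html
<!-- /sitemap.html: kept out of Google's index, but its outgoing
     links are still followed and still pass PageRank -->
<head>
  <meta name="robots" content="noindex, follow" />
</head>
<body>
  <a href="/sitemap-products.html">Product sitemap</a>
  <a href="/sitemap-articles.html">Article sitemap</a>
</body>
```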
-
I just wanted to say that I completely agree with EGOL and the understanding he shared. I initially skipped responding to this question because I didn't want to write out all the disclaimers; EGOL tackled the question anyway and offered great detail in both the original reply and the follow-up.
-
Great answer, and in this specific case, I too have the "noindex, follow" attribute on the pages I do not want indexed.
Regarding competitors: I study them, both onsite and their link profiles, especially the successful ones, to learn from them. Most of the SEO strategies I've learned have come from reading forums, blogs, etc. Quite often people have conflicting views there. So I try to find real-life examples of tactics that are quite likely working for a successful site, try to spot a pattern, and where I see one, I try to implement it on my own sites.
You, on the other hand, have experience and proven philosophies :), something I am dying to acquire.
Thanks
-
Here is a philosophy that I have... (I am not trying to be a wise guy... just sayin'....)
I don't pay a lot of attention to the methods used by my competitors. Instead I decide what I think will work best for me and then do it.
Right now I have pages on my site that I don't want in the search engines index. So I have code on them as follows....
<meta name="robots" content="noindex, follow" />
I believe that code keeps them out of the index but allows pagerank to flow through them to other pages. I offer that here so that anyone can tell me if it is wrong.
I welcome anyone who can set me straight or anyone who can suggest a better method.
However, I am not going to look at my competitors and try to figure out what they are doing because there is a very good chance that they don't know what they are doing. (I think your competitors don't know what they are doing.)
I have absolutely no problem with doing things differently from my competitors. In fact I think that mimicking them is the best way to finish behind them.
-
EGOL, thank you so much for your input; I really value your opinion. However, I have a follow-up question. I may be muddling things up here, but here it is:
Many of my successful competitors in various niches have added rel=nofollow to certain internal pages.
For example -
1. On the homepage of this WordPress site, the anchor text links to the WP tag pages have rel=nofollow. The tag pages themselves are "noindex,follow".
2. Also, all links in the header are rel=nofollow. The only followed links are to post pages, and the post pages are being used for navigation.
Any page whose anchor text link carries rel=nofollow is "noindex,follow" itself. Nowhere has a "nofollow" been applied to a whole page; it appears only on certain anchor text links.
Is that slightly different from making the whole page nofollow? Because here, only specific target pages are being stopped from receiving any link juice.
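To illustrate the distinction being asked about, the two placements look like this (hypothetical markup; /tag/shoes is just an example URL):

```html
<!-- Link-level: only this one link withholds juice from its target page -->
<a href="/tag/shoes" rel="nofollow">Shoes</a>

<!-- Page-level: a meta tag that makes every link on the page nofollowed -->
<meta name="robots" content="nofollow" />
```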
-
I am going to explain how I understand this. I could be wrong on some of the details for two different reasons: 1) I am simply wrong, or 2) I am correct according to what search engines have said in public, but they are doing something different in practice.
When nofollow was first introduced, a lot of people used it to "sculpt" the flow of PageRank. They were told at the time by some search engine employees that PageRank did not flow into nofollowed pages. That is how the search engines that made public statements about it were supposed to be treating it in the beginning.
Later we learned that Google (and maybe other search engines) changed their minds on how they handle nofollow, and that change was to evaporate ALL of the PageRank that would have flowed through a nofollowed link. In that situation it would be a bad idea to use nofollow, because the PageRank was permanently lost.
Do they still handle nofollow links that way? I don't know.
However... the way I currently understand it is that if you designate a page as noindex/follow, then PageRank flows into that page and through the links on that page. This would conserve any pass-through PageRank, but would result in the loss of whatever PageRank is retained by that page (or maybe it all passes through, since the page is noindex; I don't know).
So, if I had pages on my site that I wanted to link to but didn't want in the index, I would use noindex/follow to allow the PageRank that flows into those pages to pass through to other pages on my site. But I would never be sure that it really works that way. Also, keep in mind that there are numerous search engines and there could be many different ways of treating these links, and PageRank is a concept unique to Google.
If anyone understands this differently or suspect that it does not work as explained, please let us know.