"Equity sculpting" with internal nofollow links
-
I've been trying a couple of new site auditor services this week, and both have flagged the fact that I have some nofollow links to internal pages.
I see this subject has popped up from time to time in this community. I also found a 2013 Matt Cutts video on the subject:
https://searchenginewatch.com/sew/news/2298312/matt-cutts-you-dont-have-to-nofollow-internal-links
At a couple of SEO conferences I've attended this year, I was advised that nofollow on internal links can be useful so as not to squander link juice on secondary (but necessary) pages. I suspect many websites have a lot of internal links in their footers and are sharing the love with pages that don't really need a boost. Those pages can still be indexed, just not given a helping hand to rank by strong pages. This "equity sculpting" (I made that up) seems to make sense to me, but am I missing something?
Examples of these secondary pages include login pages, site maps (human readable), policies – arguably even the general contact page.
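To be concrete, what the auditors are flagging is footer markup along these lines (the paths here are hypothetical, just to illustrate the pattern):

```html
<footer>
  <a href="/">Home</a>
  <a href="/products">Products</a>
  <!-- "secondary" pages given the nofollow hint the auditors flagged -->
  <a href="/login" rel="nofollow">Log in</a>
  <a href="/sitemap" rel="nofollow">Site map</a>
  <a href="/privacy" rel="nofollow">Privacy policy</a>
</footer>
```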
Thoughts?
Regards,
Warren
-
Useful reference links. Many thanks, Mike.
-
Here's a bit more on the subject.
Matt Cutts – PageRank Sculpting (2009)
TheSEMPost – PageRank Sculpting (2015)
The SEOBlog – PageRank Sculpting (2014)
It just feels like this concept resurfaces every other year or so, and for every report that it works there's one that it doesn't. Personally, I think it's a better use of time and effort to look at your site navigation and make sure it's user friendly, intuitive, and natural, so that it directs flow better, and to work on link-building efforts to increase authority.
-
Thanks, Mike.
Just to be clear, I still want those non-primary internal pages (maybe not the human sitemap and login page) to be indexed, so a robots.txt approach will not completely solve the problem. I just don't want to potentially squander link juice on secondary pages. Footers tend to carry quite a bulk of links, so there is a lot of dilution there. I had hoped that by halving my links, I'd be doubling the outbound link equity.
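For what it's worth, that "halving links doubles the equity" arithmetic only holds under the pre-2009 model of nofollow. Here is a toy calculation (all numbers invented) contrasting the old behaviour with the post-2009 behaviour described in the references above:

```python
# Toy model of equity passed per followed link, with invented numbers:
# 1.0 unit of equity to distribute, 10 outbound links, 5 of them nofollowed.

def equity_per_followed_link(total_equity, total_links, nofollowed, pre_2009):
    """Return the equity each *followed* link receives under each model."""
    followed = total_links - nofollowed
    if pre_2009:
        # Old behaviour: nofollowed links dropped out of the denominator,
        # so their share was redistributed to the remaining followed links.
        return total_equity / followed
    # Post-2009 behaviour: nofollowed links still count in the denominator;
    # the share assigned to them simply evaporates.
    return total_equity / total_links

print(equity_per_followed_link(1.0, 10, 5, pre_2009=True))   # old model: 0.2
print(equity_per_followed_link(1.0, 10, 5, pre_2009=False))  # current model: 0.1
```

Under the second model, nofollowing half the links leaves each remaining link's share unchanged, which is exactly the "no equity saving" claim.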
The first reference was useful, but only mentions my sculpting goal in the very last sentence without elaborating. The thing I found most interesting was the first comment from Mark Traphagen:
So, if this is true, there's absolutely no equity saving to be had from nofollow'ing internal links to my non-primary pages. But... is it true?! Any experiment results out there?
Finally, with regard to old versions of policies being published, I can't see how that would cause any legal problems. It's the currently published version that matters and, while I can set cache-expiry directives, nobody can be held responsible for out-of-date information stored in a third-party cache (unless, of course, it was unlawful at the time of publishing).
-
Adding nofollow to a handful of links on your site will not magically sculpt link equity in a way that creates a noticeable improvement like that. If anything, you could just use robots.txt to stop those pages from being crawled. The bots don't necessarily need to crawl your login page, your human-readable sitemap (they already have their own), policies (which can change and cause legal issues if an older version is cached), and a few others.
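As a sketch, assuming typical URL patterns for such pages (the paths below are hypothetical), that robots.txt could look like:

```
User-agent: *
Disallow: /login
Disallow: /sitemap.html
Disallow: /policies/
```

One caveat: robots.txt blocks crawling, not indexing, so a blocked page can still appear in the index if it is linked elsewhere.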
And just a few months ago Gary Illyes stated that there's no good reason to nofollow internal links:
http://www.thesempost.com/google-dont-ever-nofollow-your-own-internal-links/
Related Questions
-
How can I stop a tracking link from being indexed while still passing link equity?
I have a marketing campaign landing page that uses a tracking URL to track clicks. The tracking links look something like this: http://this-is-the-origin-url.com/clkn/http/destination-url.com/ The problem is that Google is indexing these links as pages in the SERPs. Of course, when they get indexed and then clicked, they show a 400 error, because the /clkn/ link doesn't represent an actual page with content on it. The tracking link is set up to instantly 301 redirect to http://destination-url.com. Right now my dev team has blocked these links from crawlers by adding Disallow: /clkn/ in the robots.txt file; however, this blocks the flow of link equity to the destination page. How can I stop these links from being indexed without blocking the flow of link equity to the destination URL?
Technical SEO | UnbounceVan
-
Transferring link juice on a page with over 150 links
I'm building a resource section that will hopefully attract a lot of external links, but the problem is that the main index page will carry a big number of links (around 150 internal links: 120 pointing to resource sub-pages and 30 being the site's navigational links), so it will dilute the passed link juice and possibly waste some of it. Each of those 120 sub-pages will contain about 50-100 external links and 30 internal navigational links. To better visualise the matter, think of this resource as a collection of hundreds of blogs categorised by domain on the index page. The question is how to build the primary page (the one with 150 links) so it passes the most link juice to the site, or do you think this is OK and I shouldn't be worried about it (I know there used to be a rough limit of 100 links per page)? Any ideas? Many thanks
Technical SEO | flo2
-
How Does Google's "index" find the location of pages in the "page directory" to return?
This is my understanding of how Google's search works, and I am unsure about one thing in particular:
1. Google continuously crawls websites and stores each page it finds (let's call this the "page directory").
2. The "page directory" is a cache, so it isn't the "live" version of the page.
3. Google has separate storage called "the index", which contains all the searchable keywords. The keywords in "the index" point to the pages in the "page directory" that contain them.
4. When someone searches for a keyword, that keyword is looked up in the "index", which returns all relevant pages from the "page directory".
5. The returned pages are then ranked by the algorithm.
The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" must point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as on the live website (and would the keywords in the "index" point to these URLs)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by understanding how the search process works better.
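The pointer structure being described is essentially an inverted index. Here is a toy sketch (all URLs and page text invented) of how keyword entries could point at cached pages by URL:

```python
# Toy sketch of the two structures described above: a "page directory"
# (cache) keyed by URL, and an inverted "index" mapping each keyword to
# the URLs of cached pages that contain it. All data here is invented.

page_directory = {
    "www.website.com/page1": "cheap flights to athens",
    "www.website.com/page2": "athens hotel reviews",
}

# Build the inverted index: keyword -> set of URLs in the cache.
index = {}
for url, cached_text in page_directory.items():
    for word in cached_text.split():
        index.setdefault(word, set()).add(url)

# A search looks the keyword up in the index, then fetches the cached
# copies from the page directory for ranking.
results = [page_directory[url] for url in index.get("athens", ())]
print(sorted(index["athens"]))
```

So the index doesn't need to know anything about the live site: it only stores the URL keys of the cached copies, which is why changing a page's URL effectively creates a new entry in the cache.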
Technical SEO | reidsteven75
-
Rel="canonical" of .html/ to .html
Hi, could you guys confirm that the following scenario is completely senseless? I just got the instruction from an external consultant (with quite good SEO knowledge) to use a rel="canonical" from http://www.example.com/petra.html/ to http://www.example.com/petra.html. I mean, a folder petra/ to petra is OK, but a trailing slash after .html? Apart from that, I would rather choose a 301, not a rel="canonical". What is your position here?
Technical SEO | petrakraft
-
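For reference, the consultant's suggestion amounts to a line like this in the head of the trailing-slash version (a sketch reusing the example URLs above):

```html
<!-- In the <head> of http://www.example.com/petra.html/ -->
<link rel="canonical" href="http://www.example.com/petra.html" />
```

A server-level 301 would instead remove the duplicate URL entirely, which is generally the cleaner option when the trailing-slash version has no reason to exist.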
Internal anchor text links
If your site has, say, 50 pages, and you have an anchor text link from the home page to that page, what should you do in response to last Friday? I had 60 keywords in the top 10 and now they are all in the top 30 at best. PageRank is still 5s and 6s on all of these pages. No problem on this site until last Friday!
Technical SEO | jdcline
-
Internal Links not Crawled by Open Site Explorer
Can someone please tell me why www.hotelelgreco.gr shows only 2 internal links in OSE, despite the fact that the text content has a plethora of them? Thanks in advance.
Technical SEO | socrateskirtsios
-
Is this a good link?
Found a .gov link to my website www.kars4kids.org. The URL it links to is http://www.nyc.gov/cgi-bin/exit.pl?url=http://www.kars4kids.org/ which does eventually redirect to kars4kids. Will search engines see this as a link?
Technical SEO | Morris77
-
Mapping Internal Links (Which are causing duplicate content)
I'm working on a site that is throwing off a lot of duplicate content for its size. Much of it appears to come from bad links within the site itself, which were introduced when it was ported over from static HTML to Expression Engine (by someone else). I'm finding EE an incredibly frustrating platform to work with, as it appears to redirect 404s on sub-pages to the page directly above the sub-page without actually returning a 404 response. It's very odd. Does anyone have any recommendations for software to clearly map out a site's internal link structure, so that I can find which bad links are pointing to the wrong pages?
Technical SEO | BedeFahey