Does a Single Instance of rel="nofollow" cause all instances on a page to be nofollowed?
-
I attended the Bruce Clay training at SMX Advanced Seattle, and he mentioned link pruning/sculpting (here's an SEOmoz article about it: http://www.seomoz.org/blog/google-says-yes-you-can-still-sculpt-pagerank-no-you-cant-do-it-with-nofollow).
Now during his presentation he mentioned that if you have one page with multiple links leading to another page, and one of those links is nofollowed, it could cause all links to be nofollowed.
Example:
Page A has 4 links to Page B: 1:followed, 2:followed, 3:nofollowed, 4:followed
The presence of a single nofollow attribute would override the three followed links, and none of them would pass link juice.
Has anyone else encountered this problem, and is there any evidence to support this? I'm thinking this would make a great experiment.
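For reference, the scenario being described would look something like this in Page A's HTML (the URLs are purely hypothetical):

<!-- Page A: four links pointing at the same Page B -->
<a href="https://example.com/page-b">Link 1 (followed)</a>
<a href="https://example.com/page-b">Link 2 (followed)</a>
<a href="https://example.com/page-b" rel="nofollow">Link 3 (nofollowed)</a>
<a href="https://example.com/page-b">Link 4 (followed)</a>

The claim to test is whether the rel="nofollow" on link 3 stops links 1, 2, and 4 from passing value as well, or whether each link is treated independently.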
-
Has anyone got any further information or evidence about this? I was under the impression it's handled on a per-link basis.
-
Maybe a page-level NOINDEX, NOFOLLOW would cause it, but I don't think a nofollow on one external link would cause the others to turn into nofollows.
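To illustrate the distinction being drawn here (URLs hypothetical): a page-level robots meta tag applies to every link on the page, whereas rel="nofollow" is an attribute on one specific link and, as far as I know, should not spill over onto the others.

<!-- Page-level directive: asks crawlers not to index the page or follow ANY of its links -->
<meta name="robots" content="noindex, nofollow">

<!-- Link-level attribute: only this one link is nofollowed -->
<a href="https://example.com/untrusted" rel="nofollow">Untrusted link</a>
<a href="https://example.com/trusted">Normal, followed link</a>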
-
I have had an instance where the upper menu had a nofollow but the lower footer links did not, and it didn't seem to matter; juice still went through. This was not the intention, as one of them was overlooked.
-
Related Questions
-
Something happened within the last 2 weeks on our WordPress-hosted site that created "duplicates" by counting www.company.com/example and company.com/example (without the 'www.') as separate pages. Any idea what could have happened, and how to fix it?
Our website runs on WordPress. We've been running Moz for over a month now. Only recently, within the past two weeks, have we been alerted to over 100 duplicate pages. It appears something happened that created a duplicate of every single page on our site: "www.company.com/example" and "company.com/example". Again, according to Moz, this is a recent issue. I'm almost certain that prior to a couple of weeks ago, both forms of the URL existed and directed to the same page without being counted as duplicates. Thanks for your help!
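One common fix, sketched here on the assumption that the www version is the one you want to keep, is to pick a single canonical hostname: add a site-wide 301 redirect from the non-www hostname to the www one at the server level, set the preferred domain in Google Webmaster Tools, and make sure every page template outputs a canonical tag that always uses the preferred hostname, along these lines:

<!-- In the <head> of each page: the href should be the www version of that page's own URL -->
<link rel="canonical" href="http://www.company.com/example">

A recent plugin, theme, or hosting change that stopped forcing the www redirect is a likely culprit for this kind of sudden duplication.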
Intermediate & Advanced SEO | wzimmer
-
72KB CSS code directly in the page header (not in external CSS file). Done for faster "above the fold" loading. Any problem with this?
To optimize for Google's PageSpeed score, our developer has moved the 72KB of CSS code directly into the page header (not in an external CSS file). This way the above-the-fold loading time was reduced. But could this affect indexing of the page or have any other negative side effects on rankings? I made a quick test and the Google cache seems to have our full pages cached, but could it somehow negatively affect our rankings, or cause Google to index fewer of our pages? (We already have some problems with Google ignoring about 30% of the pages in our sitemap.)
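A common middle ground, sketched here as a general pattern rather than a description of what your developer actually built, is to inline only the small amount of critical above-the-fold CSS and keep the bulk of the stylesheet in an external file, so every HTML response doesn't carry the full 72KB:

<head>
  <style>
    /* critical above-the-fold styles only: header, hero, basic layout */
  </style>
  <!-- the rest of the CSS stays cacheable in an external file (path is hypothetical) -->
  <link rel="stylesheet" href="/css/main.css">
</head>

Inlined CSS itself shouldn't stop Google from indexing the page, but it does add weight to every page load and can't be cached across pages the way an external file can.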
Intermediate & Advanced SEO | lcourse
-
I've got duplicate pages. For example, blog/page/2 is the same as author/admin/page/2. Is this something I should just ignore, or should I set up a 301 redirect from author/admin/page/2 to blog/page/2?
I'm going through the crawl report and it says I've got duplicate pages. For example, blog/page/2 is the same as author/admin/page/2/. Now, author/admin/page/2 I can't even find in WordPress, but it is the same thing as blog/page/2 nonetheless. Is this something I should just ignore, or should I set up a 301 redirect from author/admin/page/2 to blog/page/2?
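If the aim is simply to keep the author archive out of the index rather than redirect it, one common WordPress-level approach (a sketch; most SEO plugins can add this to author archives for you) is a robots meta tag on the author archive template:

<!-- Output only on author archive pages such as author/admin/page/2 -->
<meta name="robots" content="noindex, follow">

A 301 redirect from author/admin/page/2 to blog/page/2 is also a reasonable option when there is only one author and the archives serve no purpose of their own.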
Intermediate & Advanced SEO | shift-inc
-
"Starting Over" With A New Domain & 301 Redirect
Hello, SEO Gurus. A client of mine appears to have been hit by a non-manual/algorithmic penalty. The penalty appears to be Penguin-like, and the client never received any message (not that that means it wasn't manual). Prior to my working with her, she engaged in all kinds of SEO fornication: spammy links on link farms, shoddy article marketing, blog comment spam -- you name it. There are simply too many tens of thousands of these links to have removed. I've done some disavowal, but again, so much of the link work is spam. She is about to launch a new site, and I am tempted to simply encourage her to buy a new domain and start over. She competes in a niche B2B sector, so it is not terribly competitive, and with solid content and link earning, I think she'd be OK. Here's my question: if we were to 301 the old website to the new one, would the flow of PageRank outweigh any penalty associated with the site? (The old domain only has a PR of 2.) Anyone like my idea of starting over, rather than trying to "recover"? I thank you all in advance for your time and attention. I don't take it for granted.
Intermediate & Advanced SEO | RCNOnlineMarketing
-
Rel=author, Google Plus, and picture in article page SERPs
Hello, Could someone explain the easiest way to use Google Plus and rel="author" to claim the articles written by us and get our picture beside them in the Google SERPs? Site: nlpca(dot)com
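As I understand it, the basic setup is two-way: link each article to the author's Google+ profile with rel="author", and add the site under "Contributor to" on that profile. A sketch (the profile ID below is a placeholder):

<!-- Either in the <head> of each article page... -->
<link rel="author" href="https://plus.google.com/your-profile-id">

<!-- ...or as a visible byline link inside the article -->
<a href="https://plus.google.com/your-profile-id" rel="author">About the author</a>

Google's structured data / rich snippets testing tool can then be used to check whether authorship is being picked up for a given article URL.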
Intermediate & Advanced SEO | BobGW
-
Blocking Pages Via Robots.txt: Can Images On Those Pages Be Included In Image Search?
Hi! I have pages within my forum where visitors can upload photos. When they upload photos they provide a simple statement about the photo but no real information about the image, definitely not enough for the page to be deemed worthy of being indexed. The industry, however, is one that really leans on images, and having the images in Google Image search is important to us. The URL structure is like this: domain.com/community/photos/~username~/picture111111.aspx
I wish to block the whole folder from Googlebot to prevent these low-quality pages from being added to Google's main SERP results. This would be something like this:
User-agent: googlebot
Disallow: /community/photos/
Can I disallow Googlebot specifically, rather than just using User-agent: *, so that Googlebot-Image can still pick up the photos? I plan on configuring a way to add meaningful alt attributes and image names to assist in visibility, but the actual act of blocking the pages and getting the images picked up... is this possible? Thanks! Leona
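A hedged sketch of what that robots.txt could look like. My understanding is that Google's crawlers each follow the most specific user-agent group that matches them, so an explicit Googlebot-Image group is the safest way to keep the image files crawlable while the HTML pages are blocked for the web-search crawler:

# Block the web-search crawler from the low-value photo pages
User-agent: Googlebot
Disallow: /community/photos/

# Explicitly let the image crawler into the same folder
User-agent: Googlebot-Image
Allow: /community/photos/

Worth testing against your own setup (for example with the robots.txt tester in Webmaster Tools), since the image files themselves also need to live under a path the image crawler is allowed to fetch.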
Intermediate & Advanced SEO | HD_Leona
-
No equivalent page to re-direct to for highly trafficked pages: what should we do?
We have several old pages on our site that we want to get rid of, but we don't want to 404 them since they have decent traffic numbers. Would it be fine to set up a 301 re-direct from all of these pages to our home page? I know the best option is to find an equivalent page to re-direct to, but there isn't a great equivalent.
Intermediate & Advanced SEO | nicole.healthline
-
How Do Rel=Prev & Rel=Next Work For Me?
I have implemented rel="prev" and rel="next" tags on my website. Here are some example URLs to illustrate:
http://www.vistapatioumbrellas.com/market-umbrellas?limit=40&p=3
http://www.vistapatioumbrellas.com/market-umbrellas?limit=40&p=4
http://www.vistapatioumbrellas.com/market-umbrellas?limit=40&p=5
Until now, I had blocked paginated pages in robots.txt with the following directive:
Disallow: /*?p=
I have since removed that disallow rule for the paginated pages. But I'm confused about duplicate page titles: if you check all three pages above, you will find the same page title on each of them, and I know that duplicate page titles are harmful for SEO. Will Google crawl and index all paginated pages? If yes, which page will get the most benefit in organic rankings? Is there a specific way to solve this issue?
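For reference, with the URLs above, the head of page 4 would carry something like the following (the general pattern, not necessarily the exact markup already on the site):

<!-- In the <head> of /market-umbrellas?limit=40&p=4 -->
<link rel="prev" href="http://www.vistapatioumbrellas.com/market-umbrellas?limit=40&amp;p=3">
<link rel="next" href="http://www.vistapatioumbrellas.com/market-umbrellas?limit=40&amp;p=5">

As I understand Google's guidance, these hints let it consolidate indexing signals across the series and typically surface the most relevant page, usually the first one, so duplicate titles across paginated pages are a minor issue; appending something like "- Page 3" to each paginated title is an easy way to differentiate them anyway.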
Intermediate & Advanced SEO | CommercePundit