Link flow for multiple links to same URL
-
Hi there,
My question is as follows: How does Google handle link flow when two links on a given page point to the same URL? (Do they each pass link equity individually, or not?)
This may seem like a newbie question, but there is surprisingly little evidence, and even less consensus, in the SEO community on this detail.
- Answers should include a source
- Information about the current state of the art at Google is preferred
- The question is not about anchor text, general best practices for linking, "PageRank is dead", etc.
We do know that the "historical" PageRank algorithm was implemented (a long time ago) without special handling for multiple links, as most recently stated by Matt Cutts in this video: http://searchengineland.com/googles-matt-cutts-one-page-two-links-page-counted-first-link-192718
On the other hand, many people in the SEO community say that only the first link counts. So far, however, I have not been able to find any data to back this up, which is quite surprising.
-
I totally agree on the focus point in general: it's not helpful to make layout decisions and the like with PageRank in mind.
But: for large websites (say, 100,000 pages and up), crawl rate, indexing, and rankings of the deeper parts of the site depend heavily on the internal link graph. In these cases, taking a closer look at the internal link graph gives us a lot of useful information, doesn't it?
Now think of links sitting in a template that is used on 50,000 pages. A small change there is likely to cause quite a difference in the internal link graph.
For example, I've run PageRank simulations with both models on a smaller website of only 1,500 pages / 100,000 links (a sketch of such a simulation is below). For many pages, this small modelling difference amounts to 20-30% more or less internal PageRank; for those individual pages, that could be crucial for crawling, indexation, and rankings. Still not useful?
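For anyone who wants to reproduce this kind of comparison, here is a minimal sketch of such a simulation in Python. Everything in it is a hypothetical placeholder: the tiny three-page graph, the damping factor, and the iteration count would all be replaced by your own crawl data in a real test. Model A counts every link individually (the historical PageRank behaviour from the Matt Cutts video above); Model B collapses duplicates so only the first link to a given URL counts.

```python
# Minimal sketch: compare two PageRank models on a toy link graph.
# Model A: every link counts (duplicates increase a target's share).
# Model B: duplicate links to the same URL are collapsed (first link only).

DAMPING = 0.85      # standard damping factor; an assumption here
ITERATIONS = 50     # plenty for convergence on a graph this small

# Page -> outbound link targets, duplicates allowed.
# "a" links to "b" twice, e.g. once in a template and once in the body.
links = {
    "a": ["b", "b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, collapse_duplicates):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(ITERATIONS):
        new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
        for page, targets in links.items():
            # dict.fromkeys dedupes while preserving order ("first link" model)
            outs = list(dict.fromkeys(targets)) if collapse_duplicates else targets
            share = DAMPING * rank[page] / len(outs)
            for target in outs:
                new_rank[target] += share
        rank = new_rank
    return rank

model_a = pagerank(links, collapse_duplicates=False)  # every link counts
model_b = pagerank(links, collapse_duplicates=True)   # first link only

for page in sorted(model_a):
    diff = (model_a[page] / model_b[page] - 1) * 100
    print(f"{page}: every-link={model_a[page]:.4f}  "
          f"first-only={model_b[page]:.4f}  ({diff:+.1f}%)")
```

Even on this toy graph the two models diverge noticeably for the page receiving the duplicate link, which is exactly the effect that scales up when the duplicate sits in a template used on tens of thousands of pages.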
Since Moz runs its own iterative PageRank-like algorithms: how do you handle this with mozRank / mozTrust? Which model leads to better correlations with rankings?
-
- The links both get PageRank flow...
- The link value gets divided, though, so it wouldn't exactly double the value.
- The link extraction process might choose to select only one link from the page based on certain factors (perhaps ignoring some links not because they are duplicative, but because of their location or other qualifiers)
Here is Matt Cutts talking about this very issue. And here again. It is the closest thing we have to an answer.
I think the "first link counts" idea is really an extension of an understanding of PageRank. Say a page has one outbound link: it gets 100% of the value that page can pass. Now the page adds another link, but it is the exact same link. Each link now gets 50%, and the sum is still 100%, as if the second link had never been added.
But this calculation changes depending on the other links on the page. Say a page has two links on it: one to you, one to someone else, so 50/50. If you get another link, you jump to roughly 67/33. Slightly better. As the number of links on the page increases, your additional link approaches a doubling of the first link's value. So at one end of the spectrum the duplicate is valueless; at the other end it approaches a doubling (the quick calculation below illustrates this).
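To make that arithmetic concrete, here is a quick hypothetical calculation: on a page with n outbound links, one of which points to you, adding one duplicate link to you moves your share of the page's passable value from 1/n to 2/(n+1).

```python
# Hypothetical illustration of the share calculation described above.
for n in (1, 2, 10, 100):
    before = 1 / n        # your share with a single link among n links
    after = 2 / (n + 1)   # your share after adding one duplicate link
    print(f"n={n:>3}: {before:6.1%} -> {after:6.1%}  (gain x{after / before:.2f})")
```

At n = 1 the duplicate is worthless (x1.00), at n = 2 you get the 50% to 67% jump mentioned above, and as n grows the gain approaches, but never reaches, a doubling (x2.00).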
The other question is whether anchor text is counted for all links. Some experimentation indicates that only the first link's anchor text matters. This might also point to the selection / extraction process mentioned in the bullets above.
That all being said, I think I agree with Matt Cutts on this one. This is such a small issue that you really should focus on bigger picture stuff. It is interesting, yes, but not particularly useful.
I hope that helps!