Do internal links from non-indexed pages matter?
-
Hi everybody! Here's my question.
After a site migration, a client has seen a big drop in rankings, and we're trying to narrow down the cause. They appear to have lost around 15,000 internal links following the switch, but these came from pages that were blocked in the robots.txt file. Is there any research on the impact of internal links from noindexed (or robots.txt-blocked) pages?
Would be great to hear your thoughts!
Sam
-
I assume these are pretty deep in the site structure, so I don't think those "links" being reported are very powerful or important. Some people claim that, since PageRank is recursive, you don't want to cut off paths, but when the paths are deep I've rarely seen any evidence to support this. A big, bloated index full of thin content, especially content available on other sites, is a much bigger danger.
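A rough way to sanity-check the "PageRank is recursive" concern is to run the classic power iteration on a toy link graph and see how little rank deep, dead-end pages actually accumulate or pass. This is only an illustrative sketch: the graph, damping factor, and iteration count are made up and bear no relation to Google's actual model.

```python
# Toy PageRank via power iteration, to illustrate how little rank
# deep dead-end pages typically carry. All values are illustrative.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                # Dangling page: spread its rank evenly across all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
        rank = new
    return rank

# Home links to two category pages; each category links back to home
# and down to one deep product page with no outlinks of its own.
graph = {
    "home": ["cat1", "cat2"],
    "cat1": ["home", "prod1"],
    "cat2": ["home", "prod2"],
    "prod1": [],
    "prod2": [],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "home" accumulates the most rank
```

Even in this tiny graph, the deep product pages end up with the least rank, which is the intuition behind "deep links being reported aren't very powerful."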
I would not recommend using both a NOINDEX and a rel=canonical on these pages. It's a mixed signal, and that can cause Google to ignore one or both signals (and at their choosing, not yours). I think NOINDEX is fine here. I've built structures like this for things like event websites (where we index the main event but NOINDEX all of the cities/dates, because they change so often) and have never seen any major issues. Actually, in one notable case, even before Panda came along, the site's rankings improved measurably.
-
Hi Pete! Sorry about the delay.
The site is https://www.holidayhypermarket.co.uk/, and the non-indexed pages are products such as:
These are noindexed as they tend to have syndicated content.
Thanks!
-
Blocked pages are generally not going to pass internal link equity, but the impact of this depends a lot on your site structure. If these were deep pages at the end of paths and your site nav covers major/ranking pages, it shouldn't matter too much. If these pages were in the middle of paths, you could be causing serious problems.
There's also the question of whether these pages themselves (the blocked ones) were getting inbound links or were themselves ranking for some of these terms.
Unfortunately, at this scope, it's really hard to speak in generalities. Can you give us a sense of what these pages are and why they were blocked? How large is the site overall?
-
Hi Sam,
If the pages that you are talking about have been blocked by robots.txt, I do not think they would be beneficial in any way. In our case (because of a development decision made back in 2009, which still hasn't been corrected), we have pages that are noindex, follow, and I have seen that some anchor texts used for internal linking still pass value to the landing pages.
I hope this helped, Keszi
-
Hi,
I can't say whether any research has been done on this topic. First, I'd like to quote what Moz says about internal linking: "Internal links are most useful for establishing site architecture and spreading link juice (URLs are also essential)."
I'd like to break this into two parts:
1. If pages are linked only from blocked pages, the crawler won't discover them through that path, because the linking pages are blocked in robots.txt; this hinders their ability to get into the search engines' indices. But I presume these pages were blocked in robots.txt before the migration too, so this can't be the reason.
2. Link juice won't flow from the blocked pages, but it was blocked before the migration as well, so this also can't be the reason.
More importantly, a website can lose rankings during a migration if redirects are not set up properly, so please check whether you followed migration best practices by reviewing the guide below:
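On the robots.txt point: a disallowed page is never crawled, so its on-page links are invisible to the crawler, which is different from a noindex, follow page whose links can still be followed. You can check what a given set of rules blocks with Python's standard library; the rules and URLs below are made-up examples, not the client's actual file.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules for illustration only
rules = [
    "User-agent: *",
    "Disallow: /products/",
]

rp = RobotFileParser()
rp.parse(rules)

# Product pages are blocked; other sections remain crawlable
print(rp.can_fetch("*", "https://www.example.com/products/some-holiday"))  # False
print(rp.can_fetch("*", "https://www.example.com/deals/"))                 # True
```

Running this against the site's real robots.txt (via `rp.set_url(...)` and `rp.read()`) is a quick way to confirm exactly which sections were blocked before and after the migration.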
http://moz.com/blog/web-site-migration-guide-tips-for-seos
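The redirect check is worth automating: export a crawl of the old URLs and confirm each one 301s to its mapped new URL. Below is a minimal sketch of that audit; the URL map and crawl rows are invented for illustration, not taken from the site in question.

```python
# Hypothetical old-to-new URL map from a migration plan
redirect_map = {
    "/old/summer-deals": "/holidays/summer",
    "/old/winter-deals": "/holidays/winter",
}

def audit_redirects(crawl_rows, redirect_map):
    """crawl_rows: (url, status, location) tuples from a crawler export.
    Returns the old URLs that do not 301 to their mapped target."""
    problems = []
    for url, status, location in crawl_rows:
        expected = redirect_map.get(url)
        if expected is None:
            continue  # URL is not part of the migration map
        if status != 301 or location != expected:
            problems.append(url)
    return problems

crawl = [
    ("/old/summer-deals", 301, "/holidays/summer"),  # correct
    ("/old/winter-deals", 302, "/holidays/winter"),  # wrong status code
]
print(audit_redirects(crawl, redirect_map))  # ['/old/winter-deals']
```

Any URL this flags is a candidate for the ranking drop: a 302, a redirect chain, or a 404 on a previously ranking page.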
Thanks