Solutions for too many on-page links?
-
We just began using Moz a few months ago and have been busy cleaning up some of our warnings and errors. One that has been an issue is "too many on-page links." I am trying to correct this and am wondering how Moz counts these links. For instance, we have links to many of our product categories in a drop-down from our main menu, and those same links are listed in our footer. Does this get counted as two links or only one? If two, should we make one of the links nofollow, or how would you suggest correcting this? Our website is www.unikeyhealth.com
Since the menu and the footer appear on virtually every page of our site, fixing them would resolve this warning site-wide. Thanks for any advice.
-
Thanks Takeshi, you make some valid points. Makes sense to me.
-
The PageRank of a page is split among all the links on that page. That means every link you add lowers the value passed by every other link on the page (from an SEO perspective). Having more links also makes it more difficult for Google to crawl your site.
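As a rough illustration of that dilution, here is the naive even-split model of PageRank (a simplification for explanation only, not Google's actual algorithm): each link's share shrinks as the total link count grows.

```python
# Simplified even-split model: a page's equity is divided evenly
# among its outbound links, so more links = less value per link.
# Illustrative only; real PageRank is considerably more complex.

def link_equity_per_link(page_rank: float, outbound_links: int) -> float:
    """Equity passed by each individual link under the naive even-split model."""
    if outbound_links == 0:
        return 0.0
    return page_rank / outbound_links

# A page with 20 links passes five times more per link than one with 100.
print(link_equity_per_link(1.0, 20))   # 0.05
print(link_equity_per_link(1.0, 100))  # 0.01
```

Under this model, trimming a page from 177 links toward 100 makes each remaining link meaningfully stronger.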
The general recommendation is to stay under 100 links per page. I counted 177 on your site. 100 is not a hard rule, but generally you want to keep that number fairly low.
When counting links, every "a href" is counted as a separate link, even when two links point to the same destination. So having the same link in both the header and the footer counts as two links. Nofollowing your links will not solve this problem; only physically removing links will.
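You can see how a crawler would count this way with a short sketch using Python's standard-library HTML parser (the HTML snippet below is hypothetical, just mimicking a duplicated header/footer link):

```python
# Count every <a href> on a page the way a crawler would:
# duplicates pointing at the same URL are still separate links.
from html.parser import HTMLParser
from collections import Counter

class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

# Hypothetical page fragment: same category link in header nav and footer
html = """
<nav><a href="/products/osteo-key">Osteo-Key</a></nav>
<footer><a href="/products/osteo-key">Osteo-Key</a></footer>
"""

parser = LinkCounter()
parser.feed(html)
print(len(parser.hrefs))     # 2 -- one URL, but two separate links
print(Counter(parser.hrefs))
```

Running something like this against your own rendered pages is a quick way to audit the real on-page link count before deciding what to cut.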
Now, having a lot of links is not the end of the world, but the question you want to ask is whether having all those links is actually helping your users. I would invest in a user tracking tool such as CrazyEgg or ClickTale, and see how people are actually using your site.
Do people really select "Osteo-Key" from your massive drop-down menu? Maybe the better experience would be to present just the top-level categories, so people aren't overwhelmed by the options. How many people actually click the 40+ links in your footer? If it's a small number, removing them is probably a better user experience and will help SEO as well. Also, having 200 links on a product page is overkill.
From a conversion optimization standpoint, presenting the users with as few options as possible while giving them the capability to find anything they're looking for results in the least confusion and the most conversions. Usable web design lies in finding that balance.