Trading longer load time for greater link potential
-
How do you approach balancing load time with making a page linkworthy?
For example, a page built around rich media such as infographics, images, videos, and audio will probably take longer to load, but it is also more likely to earn links.
-
I put everything that I have into the page.
Six nice photos, two videos, a couple of charts, and 3,000 words, plus ads.
Be sure to optimize the photos and use a good server with caching.
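To make "a good server with caching" concrete, here's a minimal sketch of long-lived browser caching for images, assuming an nginx server (the file extensions and duration are illustrative, not from the original answer):

```nginx
# Illustrative sketch: tell browsers to cache image files for 30 days,
# so repeat visitors don't re-download the heavy media.
location ~* \.(jpe?g|png|gif|webp)$ {
    expires 30d;       # sets the Expires and Cache-Control: max-age headers
    access_log off;    # optional: don't log static asset hits
}
```

If the images change often, shorten the duration or version the filenames so a new URL busts the cache.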
-
Like Sean said, people have no patience on the web. Load time is important across the board for users _and_ search, not just search. Something that takes forever to load probably won't be link-worthy (or share-worthy) as a result, and even if you do get links or shares, it may not rank as well because of the slow load.
You should probably rethink your content or design and strip it down to the basics. But also do whatever you can to speed up a feature-rich page; with all the services available (most free or low cost), there are lots of ways to make your pages faster.
If the page is really image-heavy, something like jQuery Lazy Load (http://www.appelsiini.net/projects/lazyload) is awesome. If you are using a lot of jQuery or other dynamic libraries, pull them in from Google's hosted libraries (https://developers.google.com/speed/libraries/devguide) to take advantage of shared caching. If you have a lot of custom JS, minify it (http://www.minifyjs.com/javascript-compressor/). I've also had luck with Cloudflare (http://cloudflare.com) for speeding up really feature-rich pages.
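As a sketch of the lazy-load suggestion above: the jQuery Lazy Load plugin works by putting the real image URL in a `data-original` attribute and loading it only as the image scrolls into view (the filenames here are placeholders):

```html
<!-- Placeholder paths; assumes jQuery and jquery.lazyload.js are loaded. -->
<img class="lazy" data-original="/images/infographic-large.png"
     width="640" height="480" alt="Infographic">
<script>
  // Swap data-original into src only when the image nears the viewport.
  $("img.lazy").lazyload();
</script>
```

The browser then fetches only above-the-fold media on the initial load, which is usually the biggest win on image-heavy pages.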
Hope that helps.
-
I would rethink the design of the page. Are there better ways to lay it out? Could you leverage a lightbox? Could you screenshot the videos and link them with target=_blank? Granted, not every solution is ideal, but if you must cram a ton of things onto a page and accept an insane load time, consider that the long wait will tick people off, and the page won't be as link-worthy as you may think.
People on the internet have practically zero patience; just remember that when coming up with your plan.
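A minimal sketch of the screenshot-plus-target=_blank idea above: replace each heavy embedded player with a static thumbnail that opens the video in a new tab (the URLs are hypothetical):

```html
<!-- Hypothetical URLs: a lightweight screenshot stands in for the player. -->
<a href="https://www.youtube.com/watch?v=VIDEO_ID" target="_blank" rel="noopener">
  <img src="/thumbs/demo-video.jpg" width="480" height="270"
       alt="Watch the demo video (opens in a new tab)">
</a>
```

The page then loads one small image per video instead of a full player, at the cost of an extra click.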