How does infinite scrolling work with unique URLs as users scroll down? And is it SEO friendly?
-
I was on a site today, and as I scrolled down past the top post I was reading, I noticed that each post below it had its own unique URL. I hadn't seen this before and was curious whether this method of infinite scrolling is SEO friendly. Will Google's spiders scroll down and index these lower posts? The URLs of these lower posts, by the way, were the same URLs I would reach by clicking on each post directly. Google's preferred method for infinite scrolling seems to be something different: https://webmasters.googleblog.com/2014/02/infinite-scroll-search-friendly.html
All insight welcome. Thanks!
Christian
-
Thx again!!
-
Yes! You asked, "So if I understand correctly, Google will index just the first post?" There's no way of guaranteeing what Google will or won't do, but that is probably what will happen.
-
Each of the lower posts does have its own URL. As you noted above, that unique URL shows up as the user scrolls down, but there are also links to these URLs from the main nav.
-
Google will probably only count the content of the first post (or however much content displays at initial page load) when ranking and indexing that infinite-scroll page, yes. So if you want the rest of that content in the index, I'd give it its own URLs. However, Google is getting better at JavaScript and is always unpredictable, so it's not beyond the realm of possibility that it will index more content from the infinite-scroll page than initially loads. Don't be too surprised if you see that, but I wouldn't count on it.
-
Thanks Ruth! Greatly appreciate your help.
So if I understand correctly, Google will index just the first post then? Since the lower posts all have their own unique URLs, I assume Google will just index those as it crawls (of course, it's always wise to have a sitemap).
-
Hi Christian,
What you're seeing is exactly what Google recommends for infinite scroll in the resource you link to. It breaks the page up into component resources (separate URLs), each of which can be accessed on its own. Google's examples use dynamic parameters to break paginated content into pieces such as ?page=2, but if your infinite- or long-scrolling page isn't paginated content, there's no reason each component couldn't have its own URL that surfaces as you scroll down.
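A minimal sketch of how that URL swap could work as the user scrolls. The `/posts/` path, the `data-slug` attributes, and the `article` markup here are all invented for illustration, not taken from the site in question:

```javascript
// Hypothetical sketch: as each post scrolls into view, swap the address bar
// to that post's own standalone URL without reloading the page.

// Pure helper: map a post's slug to its standalone URL (invented scheme).
function urlForPost(slug) {
  return "/posts/" + slug + "/";
}

// Browser-only wiring, guarded so the helper above stays usable elsewhere.
if (typeof window !== "undefined" && "IntersectionObserver" in window) {
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          // replaceState swaps the URL without piling up one history
          // entry per post, so the back button stays sane.
          history.replaceState(null, "", urlForPost(entry.target.dataset.slug));
        }
      }
    },
    { threshold: 0.5 } // fire once a post is at least half visible
  );

  document.querySelectorAll("article[data-slug]").forEach((post) => {
    observer.observe(post);
  });
}
```

Whether to use `replaceState` or `pushState` is a judgment call: `pushState` would add a history entry per post, which some sites prefer so the back button retraces the scroll.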
I actually really like this method as a compromise between the "one long page with all the information on it" approach to web design and the "landing pages for people looking for specific bits of information" approach to SEO. For example, I often have SaaS clients who want all the information about what their product does on one long page. This is great for people who want to research the whole product at once, but makes it hard for me to optimize for keywords pertaining to individual features of the product. The solution is to have separate landing pages that talk about specific features, all linked together in one "product" page that scrolls using the methodology outlined in the Google resource you linked to. Plus, it means that people who are just looking for that one feature arrive on a page that's about that feature, instead of having to scroll to find what they're looking for.
With infinite scroll, Google is usually only going to crawl and index what is available to the user before more of the page loads. So if you want Google to crawl and index all of the content on your infinite-scroll page, this is the way to do it. It's also better for users who don't have JavaScript enabled. I hope that makes sense; let me know if you have more questions!
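Here's a rough sketch of the component-page pattern from the Google post. The `?page=2` URL scheme and the `main`/`article` selectors are assumptions for illustration; the key idea is that each chunk of the scroll is a real page that renders on its own, and the script just stitches the next one in:

```javascript
// Hypothetical sketch: each "page" of posts exists as a standalone URL
// (?page=2, ?page=3, ...); as the user nears the bottom, fetch the next
// page and append its posts to the current one.

// Pure helper: the URL of the next component page (invented scheme).
function nextPageUrl(path, currentPage) {
  return path + "?page=" + (currentPage + 1);
}

// Browser-only wiring, guarded so the helper stays usable elsewhere.
if (typeof window !== "undefined") {
  let page = 1;
  let loading = false; // prevents duplicate fetches from rapid scroll events

  window.addEventListener("scroll", async () => {
    const nearBottom =
      window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
    if (!nearBottom || loading) return;

    loading = true;
    // Fetch the standalone page and pull just its posts into this one.
    const res = await fetch(nextPageUrl(location.pathname, page));
    const doc = new DOMParser().parseFromString(await res.text(), "text/html");
    doc.querySelectorAll("article").forEach((post) => {
      document.querySelector("main").appendChild(post);
    });
    page += 1;
    loading = false;
  });
}
```

For crawlers and users without JavaScript, a plain `<a href="?page=2">More posts</a>` link at the bottom of each page gives a path to the same content; the script can hide or replace it when it runs.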
-
Check pymnts.com
-
I'm afraid I haven't understood the question. What do you mean by "unique URLs"? Can you post a link to the website?