GoogleBot Mobile & Depagination
-
I am building a new site for a client and we're discussing their inventory section. What I would like to accomplish is to have all their products load on scroll (or swipe on mobile). I have seen suggestions to load all content in the background at once and show it as users swipe, lazy-loading the product images. This will work fine for the user, but what about how GoogleBot Mobile crawls the page?
Will it simulate swiping? Will it load every product at once, killing page load times because of all of the images it must load at once? What are considered SEO best practices when loading inventory using this technique?
I worry about this because it's possible for 2,000+ results to be returned, and I don't want GoogleBot to try to load all those results at once (with their product thumbnail images). And I know you will say to break those products up into categories, etc. But I want the "swipe for more" experience. 99.9% of our users will click a category or filter the results, but if someone wants to swipe through all 2,000 items on the main inventory landing page, they can. I would rather have this option than "Page 1 of 350".
I like option #4 in this question, but I'm not sure how Google will handle it.
I asked Matt Cutts to answer this; you can upvote the question here if you'd like:
https://www.google.com/moderator/#11/e=adbf4&u=CAIQwYCMnI6opfkj -
What you ideally want to do is set up the mobile site as a standard, paginated site, then use JavaScript to dynamically load each page in an order driven by the user's actions.
This has two benefits:
-
SEO and SERPs. The pages will be indexed as they should be. If you have one huge page, you are still limited to targeting the same two or three keywords as always. When you see a good infinite-scroll website, it is not one page; it only looks that way because JavaScript calls additional pages at triggers that have been set.
-
A graceful fallback for users without JavaScript (or "fallforward", as plain pages are actually the native state). If you have one huge page that lazy-loads everything with JavaScript and the visitor's browser does not support it, you get 2,000 pages' worth of images loading at one time, which is otherwise known as a bounce.
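The "fallforward" point amounts to progressive enhancement: ship real, server-rendered pagination links and only layer the script on top. A minimal sketch of that idea, assuming a hypothetical `id="next-page"` link (the id and markup are illustrative assumptions, not something from the original answer):

```javascript
// Pure helper: pull the next page's URL out of a page's HTML.
// Returns null on the last page, where no "next" link is rendered.
function nextHrefFromHtml(html) {
  const tag = html.match(/<a\b[^>]*id="next-page"[^>]*>/);
  if (!tag) return null; // last page: nothing left to load
  const href = tag[0].match(/href="([^"]+)"/);
  return href ? href[1] : null;
}

// Browser glue: hide the link for JS users and hand its URL to an
// infinite-scroll loader; non-JS users simply click the link instead.
function enhanceNextLink(loadPage) {
  const next = document.querySelector('#next-page');
  if (!next) return;
  next.style.display = 'none';
  loadPage(next.getAttribute('href'));
}
```

Because the link exists in the server's HTML, crawlers and JavaScript-less visitors still get ordinary page-to-page navigation.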
You will want to build out the site with no consideration for infinite scrolling (except in design, e.g. tile-able backgrounds for a smooth, non-stop flow), then apply the script after you have a logical site structure using siloed categories. Googlebot, Googlebot Mobile, and users who do not have JavaScript will all have a usable site, and the SERPs will rank pages as they should.
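Applied after the fact, the script might look roughly like this: a sentinel element near the bottom of the listing triggers a fetch of the next real, crawlable page. The `/inventory/page/N` URL scheme and the `#product-list` container are assumptions for illustration only:

```javascript
// Pure helper: derive the next crawlable page URL from the current one.
// The /inventory/page/N scheme is an assumed convention.
function nextPageUrl(url) {
  const match = url.match(/\/inventory\/page\/(\d+)$/);
  if (!match) return '/inventory/page/2'; // landing page -> start at page 2
  return `/inventory/page/${Number(match[1]) + 1}`;
}

// Browser glue: when the sentinel scrolls into view, fetch the next
// real page and append only its product listing to the container.
function initInfiniteScroll(sentinel, container) {
  let current = window.location.pathname;
  const observer = new IntersectionObserver(async (entries) => {
    if (!entries[0].isIntersecting) return;
    current = nextPageUrl(current);
    const res = await fetch(current);
    if (!res.ok) { observer.disconnect(); return; } // no more pages
    const doc = new DOMParser().parseFromString(await res.text(), 'text/html');
    const products = doc.querySelector('#product-list');
    if (products) container.append(...products.children);
  });
  observer.observe(sentinel);
}
```

Every URL the script fetches is also a normal page that Googlebot can crawl and rank on its own, which is the whole point of this approach.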
Tip: Keep any page-wide bars or graphic styles in the header or footer of the page. You will normally pull only the content or article portion of each page into the infinite scroll, so the site keeps a non-stop flow.
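One way to pull only the content portion of each fetched page, shown here as a pure string helper; the `<main id="content">` wrapper is an assumed convention, not something from the original answer:

```javascript
// Extract just the inner content region from a fetched page's HTML,
// leaving that page's header and footer out of the scroll flow.
// (A regex sketch; in the browser, DOMParser + querySelector is sturdier.)
function extractContent(html) {
  const m = html.match(/<main id="content">([\s\S]*?)<\/main>/);
  return m ? m[1].trim() : null;
}
```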
Hope this helps.
I know you're not using WordPress, but I am assuming you are using some sort of templated PHP script for a 2,000-product store. This WP plugin is pretty easy to understand and is what I first used to grasp the concept. Also, if you want to go a more Pinterest-style route, look into the Masonry JavaScript library. http://www.infinite-scroll.com/