GoogleBot Mobile & Depagination
-
I am building a new site for a client and we're discussing their inventory section. What I'd like to accomplish is to have all their products load on scroll (or on swipe on mobile). I have seen suggestions to load all content in the background at once and reveal it as the user swipes, lazy-loading the product images. This will work fine for the user, but how will Googlebot Mobile crawl the page?
Will it simulate swiping? Will it load every product at once, killing page load times because of all the images it must fetch? What are considered SEO best practices when loading inventory this way?
I worry about this because it's possible for 2,000+ results to be returned, and I don't want Googlebot to try to load all those results (with their product thumbnail images) at once. I know you'll say to break those products up into categories, etc., but I want the "swipe for more" experience. 99.9% of our users will click a category or filter the results, but if someone wants to swipe through all 2,000 items on the main inventory landing page, they can. I would rather offer that option than "Page 1 of 350".
I like option #4 in this question, but I'm not sure how Google will handle it.
I've asked Matt Cutts to answer this; if you'd like, you can upvote the question here:
https://www.google.com/moderator/#11/e=adbf4&u=CAIQwYCMnI6opfkj -
What you ideally want to do is set up the mobile site as a standard, paginated site, then use JavaScript to call each page dynamically, in an order defined by the user's actions.
This has two benefits:
-
SEO and SERPs. The pages will be indexed individually, as they should be. With one huge page you are still limited to targeting two or three keywords, as always. A good infinite-scroll website is not actually one page; it only looks that way because JavaScript calls additional pages when scroll triggers fire.
-
Graceful fallback for users without JavaScript (or fall-forward, as the paginated site is actually the native state). If you have one page that lazy-loads with JavaScript and the browser does not support it, you have 2,000 pages' worth of images loading at once, which is otherwise known as a bounce.
You will want to build out the site without regard to the infinite scrolling (except in design, e.g. tileable backgrounds for a smooth, non-stop flow), then apply the script once you have a logical site structure using siloed categories. Googlebot, Googlebot Mobile, and users without JavaScript will all get a usable site, and the SERPs will rank pages as they should.
Tip: Keep any page-wide bars or graphic styles in the header or footer of the page. You will normally pull only the content or article portion of each page into the infinite scroll, so the site keeps a non-stop flow.
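The pattern above (a normally paginated site with a script layered on top) can be sketched roughly like this. Note the `/inventory/page/N/` URL scheme and the `#content` container are hypothetical placeholders, not anything from the question, and the helpers are kept as plain functions so the trigger logic is easy to reason about:

```javascript
// Build the URL of the next paginated page (hypothetical URL pattern).
function nextPageUrl(basePath, currentPage) {
  return basePath + '/page/' + (currentPage + 1) + '/';
}

// Decide whether the user has scrolled close enough to the bottom
// to warrant fetching the next page.
function shouldLoadMore(scrollY, viewportHeight, documentHeight, threshold) {
  return scrollY + viewportHeight >= documentHeight - threshold;
}

// DOM wiring: only runs in a browser. Without JavaScript the visitor
// simply sees the normal paginated links -- the graceful fallback.
if (typeof window !== 'undefined') {
  var currentPage = 1;
  var loading = false;
  window.addEventListener('scroll', function () {
    if (loading) return;
    if (!shouldLoadMore(window.scrollY, window.innerHeight,
                        document.body.scrollHeight, 400)) return;
    loading = true;
    fetch(nextPageUrl('/inventory', currentPage))
      .then(function (res) { return res.text(); })
      .then(function (html) {
        // Append only the content portion of the fetched page, keeping
        // its header and footer out of the scroll flow.
        var doc = new DOMParser().parseFromString(html, 'text/html');
        var fetched = doc.querySelector('#content');
        var target = document.querySelector('#content');
        while (fetched && fetched.firstChild) {
          target.appendChild(fetched.firstChild);
        }
        currentPage += 1;
        loading = false;
      });
  });
}

if (typeof module !== 'undefined') {
  module.exports = { nextPageUrl, shouldLoadMore };
}
```

Because each fetch hits a real paginated URL, Googlebot (and no-JS users) crawl the same pages the script stitches together.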
Hope this helps.
I know you're not using WordPress, but I assume you're using some sort of templated PHP script for a 2K-product store. This WP plugin is pretty easy to understand and is what I first used to grasp the concept. Also, if you want a more Pinterest-style layout, look into the Masonry JavaScript library. http://www.infinite-scroll.com/
-
Related Questions
-
Redirecting old mobile site
Hi All, Trying to figure out the best option here. I have a website that used to use a separate mobile site (m.xyz.com) but now uses responsive design. What is the best way to deal with that old mobile site? De-index? 301 redirect back to the main site for the rare case someone finds the m. site somewhere? Thanks! Ricky
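For reference, the usual pattern for retiring an m. site in favor of responsive design is a one-to-one 301 back to the main URLs. A minimal Apache sketch (the hostnames are placeholders, and this assumes the m. paths mirror the www paths):

```apache
# In the m.example.com vhost (or its .htaccess): permanently redirect
# every request to the matching path on the responsive site.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^m\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

The 301s pass whatever equity the old mobile URLs have to the responsive pages, and the m. pages then drop out of the index on their own, so a separate de-indexing step is generally unnecessary.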
Intermediate & Advanced SEO | RickyShockley -
Question about robots file on mobile devices
Hi, We have a robots.txt file, but do I need to create a separate file for the m. site, or can I just add the line to my normal robots file? I've just read the Google Guidelines (what a great read it was) and couldn't find my answer. Thanks in advance, Andy
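One point worth noting: robots.txt is scoped per host, so a file at www.example.com/robots.txt does not apply to m.example.com — the m. subdomain needs its own file served at its own root. A minimal sketch (the hostname and paths are placeholders):

```
# Served at https://m.example.com/robots.txt -- the www file does not
# cover this host.
User-agent: *
Disallow: /search/
Sitemap: https://m.example.com/sitemap.xml
```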
Intermediate & Advanced SEO | Andy-Halliday -
A few important mobile SEO questions
I have a few basic questions about mobile SEO. I'd appreciate it if any of you fabulous Mozzers can enlighten me. Our site has a parallel mobile site with the same URLs, using an m. domain for mobile and www. for desktop. On mobile pages we have a rel="canonical" tag pointing to the matching desktop URL, and on desktop pages we have a rel="alternate" tag pointing to the matching mobile URL. When someone visits a www. page on a mobile device, we 301 them to the mobile version. Questions: 1. Do I want my mobile pages to be indexed by Google? From Tom's (very helpful) answers here, it seems that I only want Google indexing the full-site pages, and if the mobile pages are indexed it's actually a duplicate content issue. This is really confusing to me, since Google knows it's not duplicate content based on the canonical tag. But he makes a good point: what is the value of having the mobile page indexed if the same page on desktop is indexed? (I know Google is indexing both because I see them in search results: when I search on mobile, Google serves the mobile page, and when I search on desktop, Google serves the desktop page.) Are these pages competing with each other? Currently we are doing everything we can to ensure that our mobile pages are crawled (deeply) and indexed, but now I'm not sure what the value of this is. Please share your knowledge. 2. Is a mobile page's ranking affected by social shares of the desktop version of the same page? Currently, when someone uses the share buttons on our mobile site, we share the desktop URL (www., not m.). The reason we do this is that we are afraid that if people share our content with two different URLs (m.mysite.com/some_post and www.mysite.com/some_post), the share count will not be aggregated across both URLs.
What I'm wondering is: will this have a negative effect on mobile SEO, since it will seem to Google that our mobile pages have no shares? Or is this not a problem, since the desktop pages have a rel="alternate" tag pointing to the mobile pages, so Google gives the mobile page the same ranking as the desktop page (which IS being shared)?
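For reference, the bidirectional annotations described in this question look like the following in a separate-URLs setup (the hostnames and the media query value are placeholders):

```html
<!-- On the desktop page, www.example.com/some_post: -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="https://m.example.com/some_post">

<!-- On the mobile page, m.example.com/some_post: -->
<link rel="canonical" href="https://www.example.com/some_post">
```

The intent of this pairing is that Google treats the two URLs as one entity and consolidates their signals onto the desktop URL, which is also why shares pointing at the desktop URL should be consolidated rather than lost.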
Intermediate & Advanced SEO | YairSpolter -
Disavow Links & Paid Link Removal (discussion)
Hey everyone, We've been talking about this issue a bit over the last week in our office, and I wanted to extend the idea out to the Moz community to see if anyone has some additional perspective. Let me break down the scenario: We're in the process of cleaning up the link profile for a new client, which contains many low-quality SEO-directory links placed by a previous vendor. Recently, we made a connection to a webmaster who controls a huge directory network. This person found 100+ links to our client's site on their network and wants $5/link to have them removed. The client was not hit with a manual penalty, so this clean-up could be considered proactive, but an algorithmic 'penalty' is suspected based on historical keyword rankings. **The Issue:** We can pay this ninja $800+ to have him/her remove the links from the directory network and hope it does the trick. When we talk about scaling this tactic to multiple clients, we run into some ridiculously high numbers. **The Silver Lining:** The disavow links file. I'm curious how effective building one around the 100+ directory links could be, especially since the client hasn't been slapped with a manual penalty. **The Debate:** Is putting a disavow file together a better alternative to paying for crappy links to be removed? Are we actually solving the bad-link problem by disavowing, or just patching it? Would choosing not to pay ridiculous fees and submitting a disavow file for these links be considered a "good faith effort" in Google's eyes (especially considering there has been no manual penalty assessed)?
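For anyone weighing the disavow route: the file itself is just plain text, one URL or domain per line, uploaded through Google's disavow tool. A sketch with hypothetical hosts standing in for the directory network:

```
# Lines beginning with "#" are comments.
# "domain:" disavows every link from that host, which suits a
# directory network better than listing individual URLs.
domain:spammy-directory-one.example
domain:spammy-directory-two.example
# A single specific URL can also be listed:
http://another-directory.example/listings/client-site.html
```

Using `domain:` entries also means new links that the network adds later are covered without re-uploading the file.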
Intermediate & Advanced SEO | Etna -
"Starting Over" With A New Domain & 301 Redirect
Hello, SEO Gurus. A client of mine appears to have been hit with a non-manual/algorithmic penalty. The penalty appears to be Penguin-like, and the client never received any message (not that that means it wasn't manual). Prior to my working with her, she engaged in all kinds of SEO fornication: spammy links on link farms, shoddy article marketing, blog comment spam -- you name it. There are simply too many tens of thousands of these links to have removed. I've done some disavowal, but again, so much of the link work is spam. She is about to launch a new site, and I am tempted to simply encourage her to buy a new domain and start over. She competes in a niche B2B sector, so it is not terribly competitive, and with solid content and link earning, I think she'd be OK. Here's my question: If we were to 301 the old website to the new one, would the flow of PageRank outweigh any penalty associated with the site? (The old domain only has a PR of 2.) Does anyone like my idea of starting over, rather than trying to "recover"? I thank you all in advance for your time and attention. I don't take it for granted.
Intermediate & Advanced SEO | RCNOnlineMarketing -
Read More & SEO
I have just had my site redesigned. The site was designed with only the important facts as bullets at the top of the page, and all other information below in a "read more" section that expands when clicked. I am wondering whether I need the information in the "read more" section to be visible to the customer, or whether having the majority of the text in the "read more" is OK, and how having it this way will affect rankings. I held spots #1 & #2 on Google for my keywords until the site was redesigned... wondering if this was part of the reason. I have moved some of the text up to be visible on some of the pages, but it makes the site look cramped and competes with the ease of use of the site design. Any insight on this is appreciated.
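On the mechanics: if the "read more" text is present in the HTML and only hidden visually, crawlers still receive it; the open question is how much weight hidden-by-default text is given. A minimal sketch of that DOM-present, visually-collapsed pattern (the class, id, and copy are made up):

```html
<!-- The full copy ships in the HTML so crawlers see it; it is only
     hidden visually until the visitor expands it. -->
<div class="summary">Key bullet points up top…</div>
<div id="full-copy" hidden>The rest of the product copy…</div>
<button onclick="document.getElementById('full-copy').hidden = false">
  Read more
</button>
```

The alternative to check against is a design that loads the extra copy with JavaScript only after the click, in which case the text is not in the initial HTML at all and may never be indexed.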
Intermediate & Advanced SEO | Cheri11 -
Using the right Schema.org - & is there a penalty in using the wrong one?
Hi, We have a set of reviewed products (in this case restaurants) that total an average rating of 4.0/5.0 from 800-odd reviews. We know to use schema.org/Restaurant for the individual restaurants we promote, but what about for a list by city, say "restaurants in Boston", for example? For the page listing all Boston restaurants, should we use schema.org/Restaurant (but it's not one physical restaurant), or schema.org/Product plus an aggregate review score? What do you do for your product listing pages? If we get it wrong, is there a penalty, or is this simply up to us?
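For the single-restaurant pages at least, the aggregate score can be expressed directly on the Restaurant type. A JSON-LD sketch (the name, cuisine, and numbers are illustrative, not data from the question):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "servesCuisine": "Italian",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.0",
    "reviewCount": "800"
  }
}
</script>
```

For the city listing page, marking the whole list up as a single Restaurant would misdescribe the page; markup that doesn't match the visible content is more likely to be ignored or flagged in a rich-snippet review than to help.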
Intermediate & Advanced SEO | xoffie -
Fetch as GoogleBot "Unreachable Page"
Hi, We are suddenly getting an "Unreachable Page" error when any page of our site is fetched as Googlebot from Webmaster Tools. There are no DNS errors shown in "Crawl Errors". We have two web servers, named web1 and web2, which sit behind a software load balancer, HAProxy. The same network configuration has been working for over a year, and we never had any Googlebot errors before the 21st of this month. To check whether there could be an error in the sitemap, .htaccess, or robots.txt, we excluded the load balancer by pointing DNS at web1 and web2 directly, and Googlebot was able to access the pages properly with no error. But when the load balancer was made active again by pointing DNS at it, the "Unreachable Page" error reappeared. This very same configuration worked properly for over a year until the 21st of this month. The website is accessible from a browser, and there are no DNS errors either, as shown by "Crawl Errors". Can you guide me on how to diagnose the issue? I've tried all sorts of combinations, even removing the firewall, but no success. Is there any way to get more details about the error instead of just "Unreachable Page"? Regards, shaz
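Since the failure only appears behind the load balancer, a first step is usually to make HAProxy's logs show exactly what Googlebot's requests look like and which backend served (or refused) them. A fragment along these lines (the section names, ports, and IPs are placeholders for the web1/web2 setup):

```
# haproxy.cfg fragment: log the User-Agent so Googlebot fetches can be
# traced, and health-check both backends so a dead server is visible.
frontend web_front
    bind *:80
    capture request header User-Agent len 128
    default_backend web_back

backend web_back
    balance roundrobin
    option httpchk GET /
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check
```

It is also worth comparing a fetch sent with Googlebot's User-Agent string against a normal browser fetch through the balancer; if only the former fails, look for user-agent-based ACLs or rate limits in the HAProxy or firewall configuration.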
Intermediate & Advanced SEO | shaz_lhr