Google crawling different content--ever ok?
-
Here are a couple of scenarios I'm encountering where Google will crawl different content than my users see on their initial visit to the site--scenarios which I think should be OK. Of course, this is normally NOT OK; I'm here to find out if Google is flexible enough to allow these situations:
1. My mobile-friendly site has users select a city, and then it displays a location-options div that explains why they may want to let the program use their GPS location. The user must choose GPS, the entire city, a ZIP code, or a suburb of the city, and is then taken to the link chosen. OTOH, it is programmed so that a Google bot doesn't get this meaningless 'choose further' page; instead the crawler sees the page of results for the entire city (as you would expect from the URL). So basically the program defaults to the entire-city results for Googlebot, but the user is first given the ability to choose GPS.
2. A user comes to mysite.com/gps-loc/city/results. The site, seeing the literal words 'gps-loc' in the URL, fetches the GPS coordinates for his location and returns results dependent on it. If Googlebot comes to that URL, there is no way the program can return the same results, because it wouldn't be able to get the same latitude and longitude as that user.
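For what it's worth, the user-agent branching in scenario 1 could be sketched like this (a minimal illustration, assuming a simple server-side handler; the function and page names are hypothetical, not my actual code):

```python
import re

# Hypothetical sketch: decide which experience to serve for a city URL.
# Crawlers get the full-city results page (what the URL implies);
# human visitors first get the GPS/suburb/ZIP chooser.
BOT_PATTERN = re.compile(r"googlebot|bingbot|baiduspider", re.IGNORECASE)

def page_for(user_agent: str) -> str:
    if BOT_PATTERN.search(user_agent or ""):
        return "city_results"      # default: results for the entire city
    return "location_chooser"      # user picks GPS, suburb, ZIP, or whole city
```

The key point is that the bot branch returns exactly what the URL promises, rather than something users can never reach.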
So, what do you think? Are these scenarios a concern for getting penalized by Google?
Thanks, Ted
-
Thanks Cyrus. Very good points!
-
Thanks Sheena. Good point on the second scenario--those URLs are generated via user POST, so in theory Google should never see or index them. But since they can be shared, Google ends up finding them, so I do need to make sure Google doesn't index them if possible.
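One way to keep those shared GPS-result URLs out of the index would be a noindex directive on anything under the gps-loc path--either a robots meta tag or an X-Robots-Tag response header. A sketch, assuming the path check below matches how the URLs are actually structured:

```python
# Hypothetical sketch: attach a noindex directive to GPS-personalized URLs
# so shared links don't end up in Google's index.
def robots_headers(path: str) -> dict:
    if "/gps-loc/" in path:
        return {"X-Robots-Tag": "noindex, follow"}
    return {}

# The equivalent in the HTML <head> would be:
#   <meta name="robots" content="noindex, follow">
```

Unlike a robots.txt block, noindex still lets Google crawl the shared URL and then drop it from the index.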
-
This is not the definition of cloaking and I wouldn't worry too much about any penalty.
That said, anytime you redirect Googlebot to a different experience than users get, it's a situation you want to be very careful with, and in most situations avoid. Often this is solved by serving different experiences via JavaScript. Even though Google is pretty darn good at parsing JavaScript, they will often interpret the default version of a page as if JavaScript were turned off.
Regardless, I'd keep an eye on search results, Google Webmaster Tools, and cached versions of your site, and make ample use of "Fetch and Render" in GWT to ensure Google interprets your site the way you think it should.
-
I do not have experience with any site using this type of selector, but theoretically you should not encounter any problems, since you're showing different content with the intent of improving the experience, not to deceive. If Google handles this like an IP-based redirect, then you should be fine.
In scenario 2, however, I'm wondering if you even want Google to index these URLs--since it sounds like they will be dynamically generated and might end up as duplicates of other pages on the site (similar to internal search pages). Something to watch out for!
Related Questions
-
Subdomain vs Subdirectory - does the content make a difference?
So I've read through all of the answers that suggest using a subdirectory is the best way to approach this - you rank more quickly and have all of your content on one site. BUT what if you're looking to move into a totally new market that your current site/content isn't in any way relevant to? Some examples are Supermarkets such as Tesco (who seem to use a mix of methods) http://www.tesco.com/groceries/, http://www.clothingattesco.com/, http://www.tesco.com/bank/ which links out from their main site to http://www.tescobank.com/ etc and Sainsburys http://www.sainsburys.co.uk/ who use subdomains - here they have their grocery offering, their bank offering, clothes, phones etc split into subdomains. If you have a product that is totally new to your Brand and different from all the products on your current site, does this change the answer to subdirectory vs subdomain? Would be great to hear your expert opinions on this. Thanks
Intermediate & Advanced SEO | | giffgaff2 -
AngularJS - How does Google go?
We're rebuilding our entire website in AngularJS. We've got it rendering fine in WMT, but does that mean its content is detectable? I've looked into prerender.io, and that seems like a great solution to the problem of not serving any static HTML, but is it really necessary? I'm asking because I'm currently having this argument with my devs, and they're all certain that Google renders AngularJS fine.
Intermediate & Advanced SEO | | localdirectories0 -
Opinions on Boilerplate Content
Howdy, Ideally, uniqueness for every page's title, description, and content is desired. But when a site is very, very large, it becomes impossible. I don't believe our site can avoid boilerplate content for title tags or meta descriptions. We will, however, mark up the pages with proper microdata so Google can use this information as they please. What I am curious about is boilerplate content repeated throughout the site for the purpose of helping the user, as well as to tell Google what the page is about (rankings). For instance, this page and this page offer the same type of services, but in different areas. Both pages (and millions of others) offer the exact same paragraph on each page. The information is helpful to the user, but it's definitely duplicate content. All they've changed is the city name. I'm curious: what's making this obvious duplicate content issue okay? The additional unique content throughout (in the form of different businesses), the small yet obvious differences in on-site content (title tags clearly represent different locations), or just the fact that the site is HUGELY authoritative and gets away with it? I'm very curious to hear your opinions on this practice, potential ways to avoid it, and whether or not it's a passable practice for large, but new, sites. Thanks!
Intermediate & Advanced SEO | | kirmeliux0 -
Duplicate content reported on WMT for 301 redirected content
We had to 301 redirect a large number of URLs. Now Google WMT is telling me that we have tons of duplicate page titles. When I looked into the specific URLs, I realized that Google is listing an old URL and the 301-redirected new URL as the sources of the duplicate content. I confirmed the 301 redirect by using a server header tool to check the correct implementation of the 301 from the old to the new URL. Question: Why is Google Webmaster Tools reporting duplicate content for these pages?
Intermediate & Advanced SEO | | SEOAccount320 -
Blog content - what to do, and what to avoid in terms of links, when you're paying for blog content
Hi, I've just been looking at a restaurant site which is paying food writers to put food news and blogs on their website. I checked the backlink profile of the site and the various bloggers in question usually link from their blogs / company websites to the said restaurant to help promote any new blogs that appear on the restaurant site. That got me wondering about whether this might cause problems with Google. I guess they've been putting about one blog live per month for 2 years, from 12/13 bloggers who have been linking to their website. What would you advise?
Intermediate & Advanced SEO | | McTaggart0 -
Unable to Crawl my Website
Hi all, I have a website that I am trying to promote, but tried to add it here in SEOMoz and got the following message: We have detected that the root domain evolving-networks.co.uk does not respond to web requests. Using this domain, we will be unable to crawl your site or present accurate SERP information. Does anyone know why this website cannot be crawled? Please help. Thank you in advance!
Intermediate & Advanced SEO | | LSDigital0 -
Is this duplicate content?
My client has several articles and pages that have 2 different URLs For example: /bc-blazes-construction-trail is the same article as: /article.cfm?intDocID=22572 I was not sure if this was duplicate content or not ... Or if I should be putting "/article.cfm" into the robots.txt file or not.. if anyone could help me out, that would be awesome! Thanks 🙂
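For the robots.txt option the asker mentions, a rule along these lines would block crawling of the CFM URLs (a sketch only; note that a rel="canonical" from /article.cfm?intDocID=... to the friendly URL is often the safer fix, since robots.txt blocks crawling but doesn't deindex URLs Google already knows about):

```
User-agent: *
Disallow: /article.cfm
```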
Intermediate & Advanced SEO | | ATMOSMarketing560 -
Content linking ?
If you have links in the left-hand navigation and content at the bottom of the page, and both link to the same page with different (or the same) anchor text, would it help the page (as the second link is surrounded by similar text), or is only the first link counted and that's it?
Intermediate & Advanced SEO | | BobAnderson0