Cloaking? Best Practices Crawling Content Behind Login Box
-
Hi-
I'm helping out a client who publishes sale information (fashion sales, etc.).
In order for a visitor to view the sale details (date, percentage off, etc.), they need to register for the site.
If I allow Googlebot to crawl the content (identified by its user agent) but serve a registration lightbox to anyone who isn't Google, would this be considered cloaking?
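To be concrete, what I'm proposing is roughly this (a simplified, hypothetical sketch, not my actual code — the function and variable names are illustrative):

```python
# Hypothetical sketch of the user-agent check described above:
# serve the full sale details to Googlebot, and the registration
# lightbox page to everyone else.

def choose_response(headers, article_html, lightbox_html):
    """Pick which HTML to serve based on the request's User-Agent header."""
    user_agent = headers.get("User-Agent", "")
    if "Googlebot" in user_agent:
        return article_html    # full sale details for the crawler
    return lightbox_html       # registration wall for everyone else
```

So the same URL would return different HTML depending on who asks for it.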
Does anyone know what the best practice for this is? Any help would be greatly appreciated.
Thank you,
Nopadon
-
Can I say I admire your inventiveness?
You go to some lengths to avoid registering, and, apart from the fact that most people don't know how to do a reverse image search, that probably reflects people's general attitude to those sorts of lightbox registration forms.
-
I'm going to respond from a human point of view and not a technical point of view.
I've been searching for houses recently on Craigslist. There are a couple of real estate agents who post ads on CL with a link to their site. When you click the link, you get a lightbox requiring that you fill out the lead form to be able to see the details of the house. I do one of two things:
-
I open up IE in private browsing mode and paste in the URL. Private browsing mode seems to prevent the lightbox script from running, and I can see the house details just fine.
-
If the house address is not provided in the CL ad, I'll copy the image URL of one of the CL photos and put that into a Google reverse image search. I'll find a different website that has posted the same house and use the site that doesn't require me to register. (I realize this may not apply in your scenario above.)
I agree with what the others have said about not wanting to provide one thing to Google and another to users, and I wanted to add that people will try to find ways around the registration. I don't have a solution for you, sadly.
-
-
Heya there,
Thanks for asking your question here.
My first point would be that human visitors don't like to be given forms the moment they first visit a site, so I would suggest you don't do this.
My alternative strategy would be to provide a home page of good content describing the data that is available on your site, and then provide a button for people to register if they want to.
Don't detect the user agent and provide alternative content: however good your intentions are, that could be considered cloaking. Google is against you serving it different content than you serve human visitors, so don't do it.
Do things differently