Log in, sign up, user registration and robots
-
Hi all,
We have an accommodation site that currently asks users to register only when they want to book a room, in the last step of the funnel. That is probably the ideal setup when you have tons of users, but right now we get around 1,500-2,000 visitors per day, and our tests show that if we ask for registration earlier (a simple one-click Facebook sign-up), we can email everyone and, with good customer service, increase our sales.
That is why we would like to ask users to register right after the home page, i.e. on Home/accommodation and all the other pages. I am not sure how to keep that content visible to robots.
Will the authentication process block Google from crawling it? Is there something we can do? We are not completely sure how to proceed, so any tip would be appreciated.
Thank you all for answering.
-
To implement early user registration without hurting SEO, consider dynamic rendering: serve the full page content to Google's crawlers while showing the registration prompt to regular visitors. This can maintain visibility in search while capturing user details upfront.
-
The registration process on most websites is pretty straightforward: you enter your email, create a password, and you are done.
-
Yes, it can be better to ask users to register right after the homepage, but getting it right will take some time, which is the main reason you should test a few different tactics.
-
Just to give you an update: our IT team solved this with CSS. The content is still in the page's code, but a CSS login overlay appears on top of it, so visitors cannot really see much more until they log in.
It is working.
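For illustration, a rough sketch of that kind of overlay; the markup, class names and login endpoint below are invented, and the key point is that the room content stays in the HTML that crawlers fetch while the overlay is purely presentational:

<!-- room content stays in the HTML, so crawlers can still read it -->
<main class="room-detail">
  <h1>Sea View Apartment</h1>
  <p>Full description, prices and availability...</p>
</main>
<!-- login overlay sits on top for human visitors -->
<div class="login-overlay">
  <form action="/login" method="post">
    <h2>Log in to continue</h2>
    <input type="email" name="email" placeholder="Email">
    <button type="submit">Continue</button>
  </form>
</div>
<style>
  .login-overlay {
    position: fixed;   /* covers the whole viewport */
    inset: 0;
    background: rgba(255, 255, 255, 0.97);
    display: flex;
    align-items: center;
    justify-content: center;
  }
</style>

Since every visitor, including Googlebot, receives the same markup, this is not cloaking, but keep in mind that Google may discount content it treats as hidden behind an interstitial.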
-
Correct. If you have a wall, Googlebot won't index what is behind it unless you make some sort of exception for it (and even then, Google frowns on walled-off content). SEM had a great article on this about Google's rules for walled news content (it may not apply to you, but it's interesting nonetheless).
I would put your wall behind your content, not in front.
-
Thank you Highland for the answer.
So, just to confirm: I understand there is no way for robots to get past authentication requirements, right? This is our main concern, as 30% of our SEO traffic lands directly on room pages and we would not like to lose it.
We have already run the A/B tests and checked the conversion rates; although we know we are losing some users and the bounce rate is higher, sales are much higher (about 30%).
We are also working on improving the product and the site, but that is a completely different matter. It seems that deciding where to ask users to register is the really important thing, then.
-
You can go this route easily enough; it just requires a deliberate decision about what is public and what sits behind your login wall. Put another way, you're going to need some public pages that explain your site, how the process works, and so on. Once you've established what is necessary from a user and SEO perspective, you can wall off the rest of your content behind a login.
You also need to experiment with your funnel. If you present your wall on the first page after the home page, is that going to drive conversions (registrations, in this case) up or down? Maybe your users read 3-4 pages before registering. Where is the sweet spot? A/B test. Funnel test. Be careful that you don't jump to "Hey, registrations increased sales, so we need everyone to register!" because you might hurt sales down the road if fewer people register.
Related Questions
-
I have two robots.txt pages for www and non-www version. Will that be a problem?
There are two robots.txt files: one for the www version and another for the non-www version, though I have moved to the non-www version.
Technical SEO | ramb
-
Disallow wildcard match in Robots.txt
This is in my robots.txt file. Does anyone know what it is supposed to accomplish? It doesn't appear to be blocking URLs with question marks:
Disallow: /?crawler=1
Disallow: /?mobile=1
Thank you
Technical SEO | AmandaBridge
-
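A side note on the rules in the question above: robots.txt rules are prefix matches, so what each pattern blocks depends on where the query string appears. A sketch, to be verified with a robots.txt testing tool before relying on it:

User-agent: *
# Blocks only URLs that begin with /?crawler=1, i.e. the homepage with that query string
Disallow: /?crawler=1
# The * wildcard blocks any URL containing ?crawler=1, e.g. /rooms/12?crawler=1
Disallow: /*?crawler=1
-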
CSS user-select and any potential effect on SEO
Hi everyone, and thank you in advance for your helpful comments. We have a client who is concerned about copying of content from their site because it has happened a few times in the last few years. We have explained that the content is essentially publicly available and that using the CSS property user-select to prevent selection of text will really only stop the less technical users from working out how to get the text. He is happy that it will at least stop some people. So the question is: would this have any effect on SEO? We assume it doesn't, but we're putting it out there for some feedback. Cheers, Eddie
Technical SEO | vital_hike
-
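For what it's worth, a rule like the sketch below (the class name is invented) only changes browser behaviour; the text is still present in the HTML that Googlebot downloads, so it should have no effect on crawling or indexing:

.article-body {
  -webkit-user-select: none;  /* older WebKit browsers */
  user-select: none;          /* prevents text selection in the browser UI only */
}
-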
No indexing URLs including query strings with robots.txt
Dear all, how can I block URLs/pages with query strings like page.html?dir=asc&order=name using robots.txt? Thanks!
Technical SEO | HMK-NL
-
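One possible approach with wildcards, using only the parameters from the example above; verify it in a robots.txt tester, and note that blocked URLs can still appear in the index without a snippet:

User-agent: *
# any URL where dir= is the first query parameter
Disallow: /*?dir=
# any URL where dir= appears later in the query string
Disallow: /*&dir=
-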
Allow or Disallow First in Robots.txt
If I want to override a Disallow directive in robots.txt with an Allow directive, does the Allow go before or after the Disallow? Example:
Allow: /models/ford///page*
Disallow: /models////page
Technical SEO | irvingw
-
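For Googlebot, the order of Allow and Disallow lines does not matter: the most specific (longest) matching rule wins, and Allow wins a tie. A small sketch with made-up paths:

User-agent: *
Disallow: /models/
Allow: /models/ford/
# /models/ford/fiesta is crawlable (the Allow rule is longer, so more specific)
# /models/toyota/corolla stays blocked by Disallow: /models/
-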
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
-
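If you did decide to block it, the rule itself is just the sketch below; the caveat is that Google asks that CSS and JavaScript stay crawlable so it can render pages properly, so blocking /js/ may do more harm than the crawl budget it saves:

User-agent: *
Disallow: /js/
-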
Invisible robots.txt?
So here's a weird one... A client comes to me for some simple changes, and it turns out there are some major issues with the site, one of which is that none of the correct content pages are showing up in Google, just ancillary (outdated) ones. It looks like an issue because even the main homepage isn't showing up for a "site:domain.com" search. So I add the site to Webmaster Tools and, after an hour or so, I get the red bar of doom: "robots.txt is blocking important pages." I check it out in Webmasters and, sure enough, it's a "User-agent: * Disallow: /". ACK! But wait... there's no robots.txt to be found on the server. I can go to domain.com/robots.txt and see it, but there is nothing via FTP. I uploaded a new one and, thankfully, that is now showing, but I've never seen that before. The question is: can a robots.txt file be stored in a way that can't be seen? Thanks!
Technical SEO | joshcanhelp
-
Content loc and player loc tags for XML video sitemaps
I need a little help understanding how to create two of the required tags for an XML video sitemap for Google:
1. video:content_loc
2. video:player_loc
Google explains their video XML sitemap requirements here:
www.google.com/support/webmasters/bin/answer.py?answer=80472
Using the example on this Google Webmaster Help page (where they explain all six of the required tags), here are examples of the two tags I need help with:
<video:content_loc>www.example.com/video123.flv</video:content_loc>
<video:player_loc allow_embed="yes" autoplay="ap=1">www.example.com/videoplayer.swf?video=12...</video:player_loc>
The video I am trying to optimize is located on a page on my site:
www.mountainbikingmaine.com/races/bradbury_hawk.html
This page has an embedded Vimeo video, so I don't have the video file on my domain; it is on Vimeo. Here is the source code from my page that I think provides the information I need to create the two tags:
<iframe src="http://player.vimeo.com/video/24580638?title=0&byline=0&portrait=0" width="400" height="533" frameborder="0"></iframe>
Bradbury Mountain Maine Hawk Migration Count (vimeo.com/24580638) from dan sexton (vimeo.com/user3219915)
Using this source from my site, can you suggest what to put in the two tags? Thanks! Dan
Technical SEO | dsexton10
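Since the file lives on Vimeo rather than on the site, a sitemap entry would normally use video:player_loc instead of video:content_loc. A rough sketch of what that entry could look like; the thumbnail URL and description below are placeholders, not taken from the actual page:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>http://www.mountainbikingmaine.com/races/bradbury_hawk.html</loc>
    <video:video>
      <!-- thumbnail, title and description are also required tags -->
      <video:thumbnail_loc>http://www.mountainbikingmaine.com/images/bradbury_hawk_thumb.jpg</video:thumbnail_loc>
      <video:title>Bradbury Mountain Maine Hawk Migration Count</video:title>
      <video:description>Hawk migration count at Bradbury Mountain, Maine.</video:description>
      <!-- the video file is on Vimeo, so point player_loc at the embeddable player URL -->
      <video:player_loc allow_embed="yes">http://player.vimeo.com/video/24580638</video:player_loc>
    </video:video>
  </url>
</urlset>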