Content available only on log-in/sign-up - how to optimise?
-
Hi Mozzers.
I'm working on a dev brief for a site with no search visibility at all. You have to log in (well, sign up) to the site (via Facebook) to get any content. Usability issues of this aside, I am wondering what possible solutions there are for getting the content indexed.
I feel that there are two options:
1. Pinterest-style: this gives the user some visibility of the content on the site before presenting a log-in overlay. I assume this also allows search engines to cache the content and follow the links.
2. Duplicate HTTP and HTTPS sites. I'm not sure whether this is possible without falling foul of the "showing one thing to search engines and another thing to users" guidelines. In my mind, you would block robots from the HTTPS site (the one shown to users, where log-in etc. is required), while its URLs would canonicalise to the HTTP version of each page, which you wouldn't present to users but would show to search engines. The actual content on the pages would be the same - see the sketch below.
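Purely as an illustration of option 2, a rough sketch of how that HTTP/HTTPS split could be wired up on the server side - a small Flask-style app with hypothetical hostnames, routes and content, not a recommendation:

```python
from flask import Flask, Response, request

app = Flask(__name__)

ARTICLE_BODY = "<h1>Example article</h1><p>The full article text…</p>"  # hypothetical content


@app.route("/robots.txt")
def robots():
    # robots.txt is fetched separately for the http:// and https:// origins,
    # so the logged-in (HTTPS) version can block crawling while the HTTP
    # version stays open to search engines.
    # (Behind a reverse proxy, request.scheme needs werkzeug's ProxyFix to be correct.)
    if request.scheme == "https":
        body = "User-agent: *\nDisallow: /"
    else:
        body = "User-agent: *\nDisallow:"
    return Response(body, mimetype="text/plain")


@app.route("/articles/<slug>")
def article(slug):
    # Hypothetical route. Both versions serve the same markup; the HTTPS
    # (user-facing) page canonicalises to the HTTP URL that crawlers see.
    canonical = f'<link rel="canonical" href="http://www.example.com/articles/{slug}">'
    return f"<html><head>{canonical}</head><body>{ARTICLE_BODY}</body></html>"
```

Even written out like this, it still relies on showing search engines a page that real users never land on, which is exactly the cloaking risk the answers below flag.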
I wonder if anyone knows any examples of large(ish) websites which do this well, or any options I haven't considered here.
Many thanks.
-
Thanks Justin and Bruce,
I think I will try and push for the "limited view until signed in" solution. The HTTP/HTTPS one just feels a bit too much like a dirty hack that will end up hurting in some way, at some point!
Thanks for your responses.
-
Could you model your approach after other subscription sites? Take, for example, the online version of the Wall Street Journal: http://online.wsj.com/home-page. They present enough content in preview mode to be relevant to both users and Google. You know from the blurb what the story is basically about.
Once someone logs in, they get the rest of the content. But I don't think they get a separate URL.
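As a very rough illustration of that preview-mode idea - a minimal Flask-style sketch with a hypothetical article store and routes, not anything WSJ actually does:

```python
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-me"  # hypothetical; sessions need a real secret

ARTICLES = {  # hypothetical in-memory store
    "example-story": {
        "title": "Example story",
        "blurb": "The opening blurb that every visitor (and crawler) can read…",
        "body": "The full story, shown only once the reader has signed in…",
    }
}


@app.route("/articles/<slug>")
def article(slug):
    story = ARTICLES.get(slug)
    if story is None:
        abort(404)
    signed_in = session.get("user_id") is not None
    # Same URL for everyone: logged-out visitors and crawlers get the blurb
    # plus a sign-up prompt; signed-in readers get the full body.
    text = story["body"] if signed_in else story["blurb"]
    prompt = "" if signed_in else '<p><a href="/signup">Sign up to keep reading</a></p>'
    return f"<h1>{story['title']}</h1><p>{text}</p>{prompt}"
```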
I wouldn't do the duplicate HTTP/HTTPS approach. In the future, you may want the whole site to be HTTPS, so you'd have to face this issue again.
-
Hi Pascale
If the content is visible to the "not signed in" end user, then it is visible to Google. If it is not, it is not visible to Google.
I might have this wrong, but it would appear that you have a Pinterest-style site and that you want further content to be visible only when the user is logged in? That would then be a site-settings issue, not a crawl issue. It is a trigger on the web server that requires the guest to log in after XYZ. The whole site is open to crawl, but you set these parameters for the guest user in your site's back office.
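A minimal sketch of that kind of trigger, assuming a Flask-style server, a session cookie for the view counter, and hypothetical paths and thresholds (the "XYZ" above):

```python
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = "replace-me"  # hypothetical; the counter lives in a session cookie

FREE_VIEWS = 3  # the "log in after XYZ" threshold; hypothetical value


@app.before_request
def meter_guest_views():
    # Only meter content pages for guests; signed-in users are never blocked.
    if session.get("user_id") or not request.path.startswith("/articles/"):
        return None
    views = session.get("guest_views", 0) + 1
    session["guest_views"] = views
    if views > FREE_VIEWS:
        # Past the free quota, guests are sent to sign up. Because the counter
        # is a cookie, a crawler that ignores cookies always looks like a first
        # visit, so the whole site stays open to crawl.
        return redirect("/signup")
    return None
```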
I think it is a case of either/or, not both.
Bruce
-
Hi Bruce,
Thanks for your response. I agree that the whole point of a log-in is to stop unwanted visitors from seeing private content - for the most part.
This is not a log-in in that same sense - it's more of a "sign-up", like Pinterest or DueDil: you have to sign up in order to view the content.
I hope that makes more sense and I will modify the title (if I can) to make it clearer.
Thanks
-
If the content is for logged-in users, why would you want it crawled?
Google crawls sites that are open to the public, so if the site is behind a login, Google will not crawl it. If Google does crawl it, the content will show up in search results, making the login process redundant.
If you want to offer subscription content, then this is a marketing issue, not a crawl issue. You will need to have open content available so that visitors can then make a call on whether or not to subscribe to your site.
Remember that a login is a cloaking device, designed to stop unwanted visitors viewing the content, which is why Google will view it in the same way.
Hope that helps
Bruce