Creating a Landing Page with a Separate Domain to Control Bounce Rate
-
I work with a unique situation: we have a site that gets tons of free traffic from internal free resources. We do make revenue from this traffic, but by its nature it has a high bounce rate. Our data shows that once a visitor from this source clicks a second page, they are engaged; visitors either bounce immediately or view multiple pages.
After testing various landing pages, I've determined that the best solution would be to create a landing page on a separate domain and hide it from the search engines (to prevent duplicate content issues and the appearance of link farming). The theory is that once visitors click through to the main site, they will bounce at a lower rate and improve the site's stats. The landing page would essentially filter out this bad traffic.
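For context, hiding a page from search engines is usually done with a robots meta tag (or an equivalent X-Robots-Tag HTTP header) rather than robots.txt alone, since a robots.txt disallow only blocks crawling, not indexing of URLs discovered via links. A minimal sketch of what the separate-domain landing page might carry:

```html
<!-- Hypothetical <head> snippet for the separate-domain landing page.
     Asks search engines not to index the page or follow its links. -->
<meta name="robots" content="noindex, nofollow">
```

Whether this avoids the duplicate-content and link-scheme concerns raised below is a separate question; the snippet only shows the mechanics of keeping the page out of the index.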
My question is, how sound is this theory? Will this cause any issues with Google or any other search engines?
-
I fully agree with Egol: the moment you start "manipulating" traffic just to please Google, you're heading in the wrong direction. It can maybe work for a few months, even years, but in the end it's always a bad strategy.
The only valid strategy is figuring out how to please your visitors; traffic and Google will follow. It's not always easy to cope with the pressure to change things because your competitors are doing it that way, or because you have certain targets, but you'll win in the long run.
-
Hi Jennifer,
I don't see bounce rate as a bad metric in this case. As long as you're generating revenue and the visitors who do click through are high quality, I think that's fine. I wouldn't waste time and money creating a new landing page and risk running into issues on Google's side, or diluting your traffic and possibly your backlinks.
Maybe you can try getting more quality backlinks, so that the traffic you get from those sites will click through and lower the bounce rate. Also, instead of spending time creating a new landing page, you could work on optimizing the current landing page so that visitors who usually bounce will click through instead.
Hope this helps.
Thank you!
-
How sound is this theory?
I'll just say what I do and why, because I don't know how Google handles this.
I have lots of pages on most of my sites that are "reference materials" that a visitor might look at, get the needed info from, and leave within a short time. I would not put these pages on another domain or hide them in any way. I believe that Google knows "the nature of this traffic".
What I believe is important is to do all that you can to entice the visitor to view another page. Do this by creating closely related content and offering it to the visitor through very attractive links that every visitor to that page will see. I frequently use nice-sized image links, similar to what Taboola, Outbrain, and other content marketing platforms use. This works well on content pages.
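As an illustration of the kind of related-content block described here (not the poster's actual markup; the URLs, class name, and titles are placeholders), it could be as simple as:

```html
<!-- Hypothetical related-content block with image links.
     Paths and titles are made up for illustration only. -->
<div class="related-content">
  <a href="/how-to-use-widgets/">
    <img src="/images/widget-howto-thumb.jpg" alt="How to use widgets">
    <span>How to Use Widgets: A Quick Guide</span>
  </a>
  <a href="/widget-selection-guide/">
    <img src="/images/widget-guide-thumb.jpg" alt="Widget selection guide">
    <span>How to Select the Right Widget</span>
  </a>
</div>
```

The design idea is simply that a prominent image link at the point where a visitor finishes reading gives them an obvious, attractive next click.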
On retail pages, I do not fear offering links to content that tells the visitor "how to use" the product, "how to select" it, "ideas to have fun with it", etc. All of these are good for the visitor, and if they see that you have lots of good content, they will buy from you.
Getting back to "the nature of the traffic": all you need to do to outrank your competitors, in my opinion, is to provide better engagement for the visitor, because every website that receives this "nature of traffic" has the same problem. Just do better than the competitors and you will win.