Duplicate page title at the bottom of the page - OK, or bad?
-
Can I get your expert opinions?
A few years ago, we customized our pages to repeat the page title at the bottom of the page.
So the page title is in the breadcrumbs at the top, and it's also at the bottom of the page under all the content. Here is a sample page: bit.ly/1pYyrUl
I attached a screenshot and highlighted the second occurrence of the page title.
I'm worried that this might be keyword stuffing or over-optimization.
Thoughts or advice on this?
Thank you so much!
Ron
-
Hi David,
Wow, every person who replied has helped me greatly. Thank you!
The tool link you sent was great; I am using it as we speak. I like the Similar Page Checker. Question: at what % threshold would you think about consolidating or deleting a similar page? I have found two sets of very similar pages so far.
We initially created them to go after similar variations of a keyword, e.g. "mens-titanium-rings" and "mens-titanium-wedding-bands". But the products are identical. That set of pages has a 54% similarity. Would you ditch one of them? I'm not sure where the threshold should be.
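For anyone who wants a rough similarity number without a web tool, here is a minimal sketch that compares the visible text of two pages with Python's difflib (it assumes the requests and beautifulsoup4 packages; the URLs are placeholders, not the store's real pages):

```python
# Minimal sketch: compare the visible text of two pages, in the spirit of
# the Similar Page Checker. Assumes requests and beautifulsoup4 are
# installed; the URLs below are placeholders.
import difflib

import requests
from bs4 import BeautifulSoup

def visible_text(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # strip code and styling, keep what the user reads
    return " ".join(soup.get_text(separator=" ").split())

a = visible_text("https://example.com/mens-titanium-rings")
b = visible_text("https://example.com/mens-titanium-wedding-bands")
print(f"Similarity: {difflib.SequenceMatcher(None, a, b).ratio():.0%}")
```

Note that difflib's ratio won't match any particular online checker exactly, so treat the number as a relative signal rather than an absolute threshold.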
Thanks again!
-
Hi Ashley,
Thanks for your help! I really appreciate it. I like your idea of replacing the image with HTML text so we can get the keywords into an h1 tag. I hadn't thought of that.
I love Moz because of you guys!
-
Hi Matt,
Thanks so much for the great feedback! A question on adding more content to this page ( bit.ly/1pYyrUl ): if we add a few more sentences of valuable content, like facts or the benefits of wearing a titanium ring, and place them in the caption area, it will push down the subcategories. Won't that (possibly) lower conversion rates?
What about adding the content to the bottom of the page?
Thanks for the recommendation of Screaming Frog. I'm downloading it now!
-
It's doubtful that one extra mention of the word or keyword phrase is having that much of an effect. Ecommerce stores frequently have a higher keyword density, and I think Google can tell the difference. We have done work for multiple ecommerce clients where the competition would stuff the living daylights out of their meta tags and on-page copy and still rank very high. I'm not saying it's a great idea, just noting an observation.
As for your situation, I wouldn't bother changing it. I would instead keep an eye on your on-page keyword density to make sure it doesn't get too high.
Here, use this: http://goo.gl/cdPni2
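For a rough idea of how such a calculation works, here is a minimal sketch of a keyword-density check (again assuming the requests and beautifulsoup4 packages; the URL and phrase are placeholders):

```python
# Minimal sketch: rough keyword-density check over a page's visible text.
# Assumes requests and beautifulsoup4; the URL and phrase are placeholders.
import requests
from bs4 import BeautifulSoup

def keyword_density(url: str, phrase: str) -> float:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    words = soup.get_text(separator=" ").lower().split()
    target = phrase.lower().split()
    hits = sum(
        1 for i in range(len(words) - len(target) + 1)
        if words[i:i + len(target)] == target
    )
    return 100.0 * hits * len(target) / len(words) if words else 0.0

print(f"{keyword_density('https://example.com/titanium-rings', 'titanium rings'):.1f}%")
```

Density tools vary in how they tokenize text, so expect the numbers to differ slightly from the tool linked above.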
-
Hi Ron,
I think it doesn't really serve any benefit to the user, and in my opinion it does look a little spammy: another way of trying to get "titanium rings" mentioned on your page again. I would personally consider removing it. I would also consider adding some more content to your pages where possible, maybe some more crossover with your blog, as you mention "titanium rings" a lot of times in a small amount of text. I know this is the product you sell, but in my experience, increasing the on-page content with relevant, valuable information will help with your optimisation.
On a side note, if you haven't already, I would recommend running your site through a tool such as Screaming Frog ( http://www.screamingfrog.co.uk/seo-spider/ ), as this will give you good insight into how your site is optimised.
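If you want a taste of what a crawler report surfaces before installing anything, here is a minimal sketch that pulls the title, h1, and meta description for a handful of URLs (assuming the requests and beautifulsoup4 packages; the URLs are placeholders). A real crawler like Screaming Frog discovers URLs itself and checks far more than this:

```python
# Minimal sketch: a tiny on-page audit pulling the title, h1 tags, and
# meta description per URL. Assumes requests and beautifulsoup4 are
# installed; the URLs below are placeholders.
import requests
from bs4 import BeautifulSoup

def audit(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    desc = soup.find("meta", attrs={"name": "description"})
    return {
        "url": url,
        "title": soup.title.get_text(strip=True) if soup.title else None,
        "h1": [h.get_text(strip=True) for h in soup.find_all("h1")],
        "description": desc.get("content") if desc else None,
    }

for page in ["https://example.com/", "https://example.com/titanium-rings"]:
    print(audit(page))
```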
Hope this helps
Related Questions
-
Valid pages in GSC show 5x more pages than performing site:domain.com?
Hi mozzers, when checking the Coverage report in GSC I am seeing over 649,000 valid pages ( https://cl.ly/ae46ec25f494 ), but when performing a site:domain.com search I am only seeing 130,000 pages. Which one is the source of truth, especially since I have checked some of these "valid" pages and noticed they're not even indexed?
-
How will canonicalizing an https page affect the SERP-ranked http version of that page?
Hey guys, until recently my site has been serving traffic over both http and https depending on the user request. Because I only want to serve traffic over https, I've begun redirecting http traffic to https. Reviewing my SEO performance in Moz, I see that for some search terms an http page shows up on the SERP, and for other search terms an https page shows. (There aren't really any duplicate pages, just the same pages being served on either http or https.) My question is about canonical tags in this context. Suppose I canonicalize the https version of a page which is already ranked on the SERP as http. Will the link juice from the SERP-ranked http version of that page immediately flow to the now-canonical https version? Will the https version of the page immediately replace the http version on the SERP, with the same ranking? Thank you for your time!
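One practical way to see what a given page is actually doing is to inspect its redirect chain and the canonical tag it serves. A minimal sketch, assuming the requests and beautifulsoup4 packages and a placeholder URL:

```python
# Minimal sketch: confirm that an http URL redirects to https and report
# the canonical tag the final page declares. Assumes requests and
# beautifulsoup4; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def check(url: str) -> None:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    hops = " -> ".join(f"{r.status_code} {r.url}" for r in resp.history)
    soup = BeautifulSoup(resp.text, "html.parser")
    link = soup.find("link", rel="canonical")
    print(url)
    print("  redirects:", hops or "none")
    print("  final URL:", resp.url)
    print("  canonical:", link.get("href") if link else "missing")

check("http://example.com/some-page")
```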
-
What to do when your home page is an index for a series of pages.
I have created an index stack. My home page is http://www.southernwhitewater.com The home page is the index itself, and the first page is http://www.southernwhitewater.com/nz-adventure-tours-whitewater-river-rafting-hunting-fishing My home page (if you look at it through the MozBar for Chrome) incorporates all the pages in the index. Is this bad? I would prefer to index each page separately, as per my site index in the footer. What is the best way to optimize all these pages individually and still have the customers arrive at the top of the page to a picture? rel=canonical? Any help would be great!! http://www.southernwhitewater.com
-
Duplicate currency page variations?
Hi guys, I have duplicate category pages across an ecommerce site: http://s30.postimg.org/dk9avaij5/screenshot_160.jpg For the currency-based pages, I was wondering: would it be best (or easier) to exclude them in robots.txt or use a rel canonical? If using robots.txt (which would be much easier to implement than rel canonical) to exclude the currency versions from being indexed, what would the correct exclusion be? Would it look something like: Disallow: */?currency/ Google is indexing the currency-based pages as well: http://s4.postimg.org/hjgggq1tp/screenshot_161.jpg Cheers, Chris
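As a sanity check before shipping a pattern, you can approximate Google's robots.txt wildcard matching yourself. Here is a minimal sketch using only the standard library; the pattern and paths are assumptions, since the real URL structure is only visible in the screenshots:

```python
# Minimal sketch approximating Google's robots.txt wildcard matching
# ('*' matches any characters, a trailing '$' anchors the end, otherwise
# the rule is a prefix match). Pattern and paths below are assumptions.
import re

def google_robots_match(pattern: str, path: str) -> bool:
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"  # honour an explicit end anchor
    return re.match(regex, path) is not None

# If the currency variant lives in a query string, a rule like this would fit:
print(google_robots_match("/*?currency=", "/rings?currency=USD"))  # True
print(google_robots_match("/*?currency=", "/rings"))               # False
```

Note that Python's built-in urllib.robotparser does simple prefix matching and won't honour '*' wildcards, which is why the sketch rolls its own translation.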
-
Duplicate content - how to diagnose duplicate content from another domain before publishing pages?
Hi, 🙂 My company has a new distributor contract, and we are starting to sell products on our own webshop. Biotechnology is the industry in question, with over 1,000 products. Writing product descriptions from scratch would take many hours, so the plan is to rewrite them. With permission from our contractors, we will import their product descriptions into our webshop. But I am concerned about penalties from Google for duplicate content. If we rewrite them we should be fine, I guess, but how can we be sure? Is there any good tool for comparing only text (because I don't want to publish the pages just to compare URLs)? What else should we be aware of besides checking the product descriptions for duplicate content? Duplicate content is a big issue for all of us; I hope these answers will be helpful for many of us. Keep up the hard work, and thank you very much for your answers. Cheers, Dusan
-
How can I prevent duplicate pages from being indexed because of a load balancer (hosting)?
The site that I am optimising has a problem with duplicate pages being indexed as a result of the load balancer (which is required and was set up by the hosting company). The load balancer passes the site through to two different URLs: www.domain.com and www2.domain.com. Somehow, Google has indexed the same URLs twice (which I was obviously hoping it wouldn't): the first on www and the second on www2. The two hosts are mirror images of each other (www and www2), meaning I can't upload a robots.txt to the root of www2.domain.com disallowing all. Also, I can't add a canonical tag into the website header of www2.domain.com pointing the individual URLs through to www.domain.com, etc. Any suggestions as to how I can resolve this issue would be greatly appreciated!
-
Duplicate
Is it harmful to have two of these which are identical in the section?
-
Category Pages For Distributing Authority But Not Creating Duplicate Content
I read this interesting Moz guide, http://moz.com/learn/seo/robotstxt, which I think answered my question, but I just want to make sure. I take it to mean that if I have category pages with nothing but duplicate content (lists of other pages: h1 title, on-page description, and links to the same), and I still want the category pages to distribute their link authority to the individual pages, then I should leave the category pages in the sitemap and meta noindex them, rather than robots.txt them. Is that correct? Again, I don't want the category pages to index or to have a duplicate content issue, but I do want them to be crawled enough to distribute their link authority to individual pages. Given the scope of the site (thousands of pages and hundreds of categories), I just want to make sure I have that right. Up until my recent efforts on this, some of the category pages have been robots.txt'd out and still in the sitemap, while others (with a different URL structure) have been in the sitemap but not robots.txt'd out. Thanks! Best, Mike
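A quick way to verify which directive a given category page actually serves (a meta robots tag versus an X-Robots-Tag header) is a check like the following; a minimal sketch, assuming the requests and beautifulsoup4 packages and a placeholder URL:

```python
# Minimal sketch: report the indexing directives a URL actually serves,
# both the X-Robots-Tag header and any meta robots tags. Assumes requests
# and beautifulsoup4; the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

def robots_directives(url: str) -> list[str]:
    resp = requests.get(url, timeout=10)
    found = []
    header = resp.headers.get("X-Robots-Tag")
    if header:
        found.append(f"header: {header}")
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup.find_all("meta", attrs={"name": "robots"}):
        found.append(f"meta: {tag.get('content')}")
    return found or ["no robots directives found"]

print(robots_directives("https://example.com/category/mens-rings"))
```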