E-commerce duplicate URLs
-
Hi
I just realized that my e-commerce product pages have no differences except the SKU, price, and product name. Beyond that, each page has the same sidebar and the same block of content under every product. This is why I am getting so many duplicate URL warnings from Moz Analytics.
I don't have any other content to add for each product because of the nature of the products. Only the price, product name, and SKU differ; everything else is the same for every product.
How can I fix this?
Thanks
-
Hi Ivor
Thank you very much for your reply.
- Each product has its own title tag and H1 tag (the product name).
- The same piece of content appears on every product page, so I think this is something I need to take care of.
-
Thank you very much for the suggestion. But customer reviews are not going to work, because each product is unique and only a single customer can purchase it. Each product is a one-of-a-kind piece.
We also don't have any product attributes. The only unique details are the product name, the SKU, and the price.
-
Hi,
You can definitely do as ivordg suggests and use the product name to create unique title tags, H1 tags, and meta descriptions to get you started. Another strategy would be to combine all of the products into one page, but this depends on the products and what's unique about them. If only the colour and size change, for example, this could be the way to go. You can then use drop-downs to let users select the product they want and add a canonical tag to the page. This is something Zappos.com does very well - http://www.zappos.com/cole-haan-ridley-blucher-sneaker
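For reference, the canonical tag on a combined page like that is just one line in each variant page's head; the URLs below are placeholders, not real pages:

```html
<!-- On each variant URL (e.g. /side-table?size=large), point back to the
     single combined product page so only that page gets indexed.
     example.com and the paths are placeholders. -->
<link rel="canonical" href="https://www.example.com/side-table" />
```

Each variant URL then consolidates its ranking signals into that one canonical page instead of competing with it.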
If that's not possible, a better long-term strategy would be to make each page truly unique. The best way to do this? Product reviews. They will not only help your SEO but also increase customer engagement, give customers a better overview of your products, and likely increase conversion rates. Econsultancy wrote a really good piece on the benefits of product reviews that would be worth your time reading.
There's also an Econsultancy piece here on how to attract product reviews from your customers.
Hope that helps, but let us know how you get on or if you need more help.
-
Hi,
I run a webshop with the same issues as well (sanidepot.be). All my products are different in the sense that they have unique product names, SKUs, and prices - just like yours.
- I use the product names as my title tags and H1 tags, which avoids duplicate title errors.
- ...yet placing the same text content under each product doesn't seem right. Is it a footer/banner, or really the same text added to each product page? I'd remove that text, or, if it's served as a separate HTML file, disallow that page in your robots.txt. If you can, I'd deactivate it entirely to avoid tons of duplicate content.
- The result would be that your product pages have a unique character, even though they don't contain a lot of text. But the real juice for your rankings should come from your home page and all the category/subcategory pages and landing pages that present your (core) products.
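The first point above - generating titles and H1s from the product name - can be sketched in a few lines. The field names, products, and shop name here are all hypothetical:

```python
# Sketch: build unique title tags and H1s from the only fields that
# differ per product (name, SKU, price). All names and data below are
# made-up examples, not a real catalogue.

def build_title(name: str, sku: str, shop: str = "Example Shop") -> str:
    """Unique, descriptive title tag: product name plus SKU as a tiebreaker."""
    return f"{name} ({sku}) | {shop}"

def build_h1(name: str) -> str:
    """The H1 is simply the product name."""
    return name

products = [
    {"name": "Oak Side Table", "sku": "OAK-001", "price": 129.00},
    {"name": "Walnut Side Table", "sku": "WAL-002", "price": 149.00},
]

titles = [build_title(p["name"], p["sku"]) for p in products]

# As long as SKUs are unique, every title is unique - no duplicate title warnings.
assert len(set(titles)) == len(titles)
```

Including the SKU means even two products with the same name still get distinct title tags.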
Hope this helps a bit.
Ivor
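For what it's worth, the robots.txt approach Ivor mentions would look something like this - the path is hypothetical, and keep in mind that Disallow only blocks crawling; a page can still appear in the index if other pages link to it:

```text
# robots.txt - hypothetical path for the shared text block
User-agent: *
Disallow: /shared-product-text.html
```

If the shared text is embedded in the page template rather than served as its own file, removing it from the template is the cleaner fix.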