Solving pagination issues for e-commerce
-
I would like to ask about a technical SEO issue that may cause duplicate content and crawling problems.
For pagination, how should rel="canonical", rel="prev"/rel="next", and the noindex tag be implemented?
Should all three be within the same page source?
Say, for example, one particular category has 10 pages of products (product catalogues). So we would noindex page 2 onwards, rel="canonical" those pages back to the first page, and also add rel="prev" and rel="next" to each page so Google can understand the category spans multiple pages.
If we let all of these pages be indexed, it could cause duplicate content issues, but I'm not sure whether all three tags need adding.
It's also my understanding that internal search results pages should be noindexed, as they do not provide much value as entry points from search engines.
-
I have found this useful in the past: https://www.ayima.com/guides/conquering-pagination-guide.html
-
Thanks for your advice; I will take a look at the Google webmaster video you've referenced. As we try to rank for specific search terms in our main categories, we put content on those pages so it can be indexed, and it's great for user experience. That's why I was thinking of also implementing the rel="canonical" tag, so the content wasn't duplicated across a series of 10 pages; but if we noindex the later pages and use the rel="prev" and rel="next" tags, that should solve the issue. It's the same for filterable results, really, as the content on the page can be duplicated when users choose to filter by specific options, such as size or colour.
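One common way to handle those filtered views is to point each filter combination's canonical back at the unfiltered category page. A sketch with hypothetical URLs (your platform's URL structure will differ):

```html
<!-- Hypothetical filtered URL: https://www.example.com/dresses?colour=black&size=10 -->
<!-- Canonicalise filter combinations to the unfiltered category so they
     are not treated as separate, duplicate pages -->
<link rel="canonical" href="https://www.example.com/dresses">
```

An alternative is a robots noindex on filtered URLs; the canonical approach consolidates signals onto the category page, while noindex simply keeps the filtered pages out of the index.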
-
Hi Joshua,
You will need all 3 of those tags to properly mark up your pagination, just not all at the same time.
Page 1 (page=1) should have a canonical to the base URL (no page=X parameter) and a rel="next" pointing to page 2. Page 2 will have a rel="prev" tag pointing to the base URL and a rel="next" pointing to page 3. And so on.
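Sketched as head markup for a hypothetical category at /category?page=N (example URLs only, not a prescription for your site), that scheme looks like:

```html
<!-- Page 1 (/category or /category?page=1): canonical to the base URL -->
<link rel="canonical" href="https://www.example.com/category">
<link rel="next" href="https://www.example.com/category?page=2">

<!-- Page 2 (/category?page=2): prev points at the base URL -->
<link rel="prev" href="https://www.example.com/category">
<link rel="next" href="https://www.example.com/category?page=3">

<!-- Last page (/category?page=10): rel="prev" only, no rel="next" -->
<link rel="prev" href="https://www.example.com/category?page=9">
```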
Google says they don't index paginated URLs anymore, but I prefer to play it safe and implement these tags anyway.
Regarding this comment: "It's also my understanding that internal search results pages should be noindexed, as they do not provide much value as entry points from search engines." There is some validity to this, but honestly, it's your preference. I lean towards preventing indexing of search results. I don't see much value in those pages being indexed, and if you're doing SEO properly, you're already providing solid entry points. Those pages will also use up a lot of your crawl budget, so that's something to consider too. Chances are, there are better sections of your site that you'd prefer bots spend their time on.
-
You shouldn't use rel="canonical" for pagination - its main use is to avoid duplicate content issues. It's possible to combine it with rel="next"/"prev", but only in very specific cases - an example can be found here: https://support.google.com/webmasters/answer/1663744?hl=en :
rel="next" and rel="prev" are orthogonal concepts to rel="canonical". You can include both declarations. For example, http://www.example.com/article?story=abc&page=2&sessionid=123 may contain:
<link rel="canonical" href="http://www.example.com/article?story=abc&page=2"/>
<link rel="prev" href="http://www.example.com/article?story=abc&page=1&sessionid=123"/>
<link rel="next" href="http://www.example.com/article?story=abc&page=3&sessionid=123"/>
=> as you can see, the canonical is used to strip the sessionid, which could cause duplicate content issues - not to solve the pagination issue.
With rel="next"/"prev" you indicate to Google that the sequence of pages should be considered as one page - which makes sense if you have, say, 4-5 pages max. If you have a huge number of pages in a pagination, this doesn't really make sense. In that case you could just decide to do nothing - or have only the first page indexed and give the other pages a noindex/follow tag.
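For that noindex/follow approach on deep pagination, the robots meta tag on page 2 onwards would look something like this (a sketch; adapt it to your own templates):

```html
<!-- Pages 2+ of a long series: keep them out of the index,
     but let crawlers follow the links through to the products -->
<meta name="robots" content="noindex, follow">
```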
Hope this clarifies.
Dirk