Paging Question: Rel Next or Canonical?
-
Hi,
Let's say you have a category page that displays a list of 20 products, with pagination of up to 10 pages.
The root page has some content, but when you click through the pagination that content is removed, leaving only the list of products.
Would it be best to apply a canonical tag on the paginated pages pointing back to the root, or to apply the rel=prev/next tags?
I understand prev/next is good for, say, a three-part article where each page holds unique content, but how do you handle the above situation?
Thanks
-
Hi there,
As Eric mentioned before, the solution depends on how much unique content the paginated pages contain compared to the main category page. If there is very little unique content, crawling and indexing them won't earn you any additional search visibility (which in these cases would usually come from additional long-tail keywords for that product category); it would only consume the crawler's time and effort. In that case, the best approach is to canonicalize each paginated page to its appropriate main product category page.
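As a minimal sketch, that canonicalization would look like this in the head of each paginated page (the URLs here are hypothetical examples, not from the thread):

```html
<!-- On a paginated page such as /category/widgets?page=3 -->
<head>
  <!-- Point the canonical at the root category page, not the paginated URL -->
  <link rel="canonical" href="https://www.example.com/category/widgets" />
</head>
```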
On the other hand, if you have the possibility to differentiate them (by featuring a unique description on each paginated category page, user reviews or ratings, or product descriptions of sufficient length) so that each page provides additional relevance value, then the best way to go is to implement the rel=next and rel=prev annotations.
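A sketch of those annotations on, say, page 3 of a paginated series (again with hypothetical example URLs):

```html
<!-- On page 3 of a paginated category, e.g. /category/widgets?page=3 -->
<head>
  <!-- Link the page to its neighbors so the series is treated as a sequence -->
  <link rel="prev" href="https://www.example.com/category/widgets?page=2" />
  <link rel="next" href="https://www.example.com/category/widgets?page=4" />
</head>
```

The first page of the series would carry only a rel=next link, and the last page only a rel=prev link.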
-
Thanks,
The root page is not the 'view all' page, but I do have a dropdown which allows all of the products to be viewed.
I wouldn't want that page to be the one displayed in the search engines, though, because loading so many products can cause lag.
Using the SEOmoz toolbar I can see that some of the paginated pages, along with the filters (view all/cheapest/highest/a-z), have some juice. Ideally I only want the root page to show in search engines, so I'm thinking of canonical tagging all the paginated and filtered pages back to the root category page.
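A minimal sketch of that same canonical tag applied to a filtered variant (the URL pattern is an assumption for illustration):

```html
<!-- On a filtered view such as /category/widgets?sort=cheapest -->
<head>
  <!-- Consolidate the filter variant back to the root category page -->
  <link rel="canonical" href="https://www.example.com/category/widgets" />
</head>
```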
Thoughts?
-
I think the best course of action depends on what you are trying to achieve. If you are trying to prevent search engines from indexing the paginated pages (since they do not contain unique content), then rel=canonical should do the trick. If you are trying to associate the content across all of the pages as one, as in your example of the three-part article, then rel=next/prev is your best bet.
Does your site have a "view all" option?