Have Your Thoughts Changed Regarding Canonical Tag Best Practice for Pagination? - Google Ignoring rel=next/prev Tagging
-
Hi there,
We have a good-sized eCommerce client that is gearing up for a relaunch. At this point, the staging site follows the previous best practice for pagination (self-referencing canonical tags on each page; rel=next and rel=prev tags referencing the next and previous pages within the category).
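For reference, a minimal sketch of what that staging setup looks like in the head of the second page of a category (the domain, category path, and ?page= parameter are placeholders, not the client's actual URLs):

```html
<!-- Current staging setup for a hypothetical /category/?page=2 -->
<link rel="canonical" href="https://www.example.com/category/?page=2"> <!-- self-referencing canonical -->
<link rel="prev" href="https://www.example.com/category/">             <!-- previous page in the series -->
<link rel="next" href="https://www.example.com/category/?page=3">      <!-- next page in the series -->
```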
Knowing that Google does not support rel=next/prev tags, does that change your thinking on how to set up canonical tags within a paginated product category? We have some categories with 500-600 products, so creating and canonicalizing to a 'view all' page is not ideal for us. That leaves us with the following options (I feel it is worth noting that we are leaving the rel=next/prev tags in place):
- Leave the canonical tags as they are: page 2 of the product category would have a canonical tag referencing the ?page=2 URL.
- Canonicalize to page 1 of the product category on all pages within the series: page 2 would have a canonical tag referencing page 1 (/category/). This is admittedly the option I am leaning toward. (Both options are sketched after this list.)
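A rough sketch of how the canonical tag on page 2 would differ between the two options (again, the domain and URL pattern are placeholders):

```html
<!-- Option 1: self-referencing canonical on each paginated URL -->
<link rel="canonical" href="https://www.example.com/category/?page=2">

<!-- Option 2: every page in the series canonicalizes to page 1 -->
<link rel="canonical" href="https://www.example.com/category/">
```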
Any and all thoughts are appreciated! If this were an existing website that was not experiencing indexing issues, I wouldn't worry about it. But since we are launching a new site, now is the time to make such a change.
Thank you!
Joe
-
An old question, but I thought I'd weigh in to report that Google seems to be ignoring self-referencing pagination canonicals on a news site I'm working on.
Pages such as /news/page/36/ declare themselves as the canonical, but Search Console reports that Google is selecting the base page /news/ as the canonical instead.
I'd be interested to know if anyone else is seeing that.
-
Hi,
I'm also very interested in what the new best approach for pagination would be.
A lot of webshops use option 2. However, this article describes the possible negative outcome of that option (search the article for 'Canonicalize to the first page'). In my opinion, that risk applies particularly to paginated blog articles, and less so to paginated product results within a category in a webshop. I think the root page is the one you want to rank in the end.
What you certainly don't want is to create duplicate content. Yes, your products (and of course their links to the product pages) differ from page to page. And yes, there will also be more internal links pointing to the root category page than to the second or third results pages. But if you have invested time in writing content for your category and in all the other on-page optimizations, those elements will be the same across all of your results pages.
So in the end, we leave it to Google and hope it recognizes the pagination. Is this the best option? Maybe, maybe not. After all, we didn't know that Google had been ignoring rel=next/prev for several years, and mostly it worked fine.
So in the end I think EffectDigital is right: just do nothing. If you see problems, I would try option 2 and use your first results page as the canonical.
-
The only thing it changes, in my opinion, is that you can delete the rel=prev/next tags to save on code bloat. Other than that, nothing changes. It's still best to allow Google to rank paginated URLs if it chooses to do so - that usually happens for a reason!
I might also lift the self-referencing canonicals. Just leave the paginated URLs without directives of any kind and force Google to determine what to do with them from the URL structure ('?p=', '/page/', '?page=', etc.). If Google is so confident it doesn't need these tags now, maybe using any directives at all just creates polluting signals that interfere unnecessarily.
In the end, I think I'd just strip it all off, monitor it, and see what happens.
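As a rough sketch of that 'strip it all off' approach, the head of a paginated URL would carry no canonical and no rel=prev/next at all, leaving Google to infer the series from the URL pattern (the URL and title below are placeholders):

```html
<!-- Hypothetical /category/?page=2 with all pagination directives removed -->
<head>
  <title>Category Name - Page 2 | Example Store</title>
  <!-- no rel="canonical", no rel="prev"/"next": Google works it out from the ?page= parameter -->
</head>
```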