Set Canonical for Paginated Content
-
Hi Guys,
This is a follow-up on this thread: http://moz.com/community/q/dynamic-url-parameters-woocommerce-create-404-errors#
I would like to know how I can set a canonical link in WordPress/WooCommerce which points to "View All" on category pages on our webshop.
The categories on my website can be viewed as 24, 48, or All products, but because the quantity constantly changes, viewing 24 or 48 products isn't always possible. To point Google in the right direction, I want to let them know that "View All" is the best way to go.
I've read that Google's crawler tries to do this automatically, but I'm not sure if this is the case on my website. Here is some more info on the issue: https://support.google.com/webmasters/answer/1663744?hl=en
Thanks for the help!
Joost
-
Joost - that's correct! Yes, I assume WooCommerce, since they are product pages.
-
Hi Dan,
Thanks for the explanation.
OK, so I block 24 and 48 for Google, but users can still use them to navigate through the site.
I assume this is WooCommerce-related because WooCommerce creates the output for the product pages, right? Thanks again!
Joost
-
Joost
I think you'll need to get a developer or someone involved to help execute, but here's the ideal scenario:
- Add meta "noindex" tags to the ?show_products=24 and ?show_products=48 URLs (a sketch of one way to do this follows after this list)
- Make your 'view all' URL ideally just /product-category/t-shirts/ - with no parameter - or if you have to, maybe /t-shirts/all/ - your goal here is to keep it consistent and NOT the same parameter as the other pages
- Then, whatever consistent URL you have for the 'all' - don't add "noindex" to that (keep it indexable).
- Wait for Google to remove 24/48 URLs from the index (you have to just check every week or two with site: searches)
- Once they are out of the index, block crawling with this line in robots.txt:
Disallow: /*?show_products= - but ONLY use that if you've changed your 'view all' URLs to something else! You ideally want a different URL structure for 'view all' vs. not view all to control crawling and indexation more easily.
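For the noindex step, here's a minimal sketch of what this could look like in a child theme's functions.php, assuming Yoast SEO is active (it gets mentioned elsewhere in the thread) and that Yoast's documented wpseo_robots filter is available. Treat it as a starting point for a developer rather than a drop-in fix:

// Sketch only: force "noindex,follow" on the 24- and 48-item category views,
// while leaving the parameter-free "view all" URL indexable.
add_filter( 'wpseo_robots', function ( $robots ) {
    $per_page = isset( $_GET['show_products'] ) ? (string) $_GET['show_products'] : '';
    if ( in_array( $per_page, array( '24', '48' ), true ) ) {
        return 'noindex,follow';
    }
    return $robots;
} );

If Yoast isn't handling the robots meta tag on these pages, the same check could instead echo a meta name="robots" content="noindex,follow" tag from a wp_head action.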
-
Hi Dan,
Thanks for your reply.
For the category t-shirts I've got this:
/product-category/t-shirts/?show_products=24 (24)
/product-category/t-shirts/?show_products=48 (48)
/product-category/t-shirts/?show_products=41 (when All is selected)
Let me know! And thanks again for your time! Really appreciate it!
Joost
-
Hi Joost
Can you provide examples of how all your URLs are set up? What does the URL look like for view all, 24 items, etc.?
-
Wow Dan!
Thanks for looking in to this!
I assume you are totally right, but I have no idea how I should implement this strategy on my site. It's just a plain WordPress install with WooCommerce. I use Yoast (of course) but never went in-depth with robots.txt.
How can I provide you with more info? Or, better, how can I do this myself?
Thanks again,
Joost
-
Hi Joost
It would be better to just "noindex" anything except view all. Then, once they are gone from the index, set a block in robots.txt so they can't be crawled anymore. That fixes the issue at the source; the canonical is more of a band-aid. So:
1. Add a meta "noindex" tag to everything except view all (I'm not 100% sure how in your WordPress setup - there's no one way, it depends on your setup).
2. Monitor the indexation of these pages in Google and wait for them to be removed (you can check by just searching for the URL in the search bar).
3. Once they are all gone from the index, block crawlers from accessing them by adding a line to your robots.txt file blocking the 24/48 URLs (a generic example follows below). Again, I don't know the exact code for your robots.txt because I'm unsure of your URL setup, but a dev or someone can help - or feel free to write back with these details and I'll try to help further.
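To illustrate step 3, a robots.txt fragment for this kind of setup could look like the one below. The ?show_products= parameter is taken from Joost's URLs elsewhere in the thread; adjust it to whatever actually drives the 24/48 views, and only add the rule once those URLs have dropped out of the index:

User-agent: *
Disallow: /*?show_products=

The * wildcard matters here: a plain Disallow: /?show_products= would only match the parameter on the homepage, not on the category URLs.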
(9 days earlier)
-
Hi Patrick,
Thanks for helping out. I've read a lot about the theory behind View All and why & when it's better to set canonicals on page 2 and 3 to View All.
But I can't seem to find any information on how to implement the rel canonical in WordPress/WooCommerce. I know that Google will try to sort it out by itself (if View All is available), but helping them with a canonical will solve a lot of 404 crawls on our site.
Any ideas?
Joost
-
Hi Joost
Did you happen to take a look at SEO Guide to Google Webmaster Recommendations for Pagination? There are some great tips in there that can help you implement this.
Google's View-all in search results and 5 common mistakes with rel=canonical posts also have some tips.
Hope these help a bit! Let me know if you have any questions or comments! Good luck!
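On the original question of setting the canonical in WordPress/WooCommerce, here's a minimal sketch of one possible approach. It assumes Yoast SEO is active and relies on Yoast's documented wpseo_canonical filter and WooCommerce's is_product_category() conditional; none of this is confirmed for this exact shop, so have a developer verify it before relying on it:

// Sketch only: point the 24/48 category views at the parameter-free
// "view all" version of the same category URL as their canonical.
add_filter( 'wpseo_canonical', function ( $canonical ) {
    if ( $canonical && is_product_category() && isset( $_GET['show_products'] ) ) {
        return remove_query_arg( 'show_products', $canonical );
    }
    return $canonical;
} );

If Yoast already strips the parameter from the canonical it generates, this filter simply returns the URL unchanged.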