Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Should my canonical tags point to the category page or the filter result page?
-
Hi Moz,
I'm working on an ecommerce site with categories, filter options, and sort options – teacherexpress.scholastic.com.
Should the canonical tags for all filter and sort options point to the category page, as on gap.com and llbean.com? Or should all sort options point to the filtered page URL, as on kohls.com?
I was under the impression that to use a canonical tag, the pages have to have the same content, meaning that Gap and L.L. Bean would be using canonical tags incorrectly. Using a filter changes the content, whereas using a sort option just changes the order.
What would be the best way to deal with duplicate content for this site?
Thanks for reading!
-
Hi Daniel,
You've gotten some good responses to your question. Do you have any additional questions or comments you would like to add?
-
I agree, that's a great approach. I think you mean JavaScript, not Java, though (that's a different language). The only thing that might make this approach a challenge is if you have so much product data before filtering that it causes a performance problem. Let's say you had 50 pages of results: if you filter server-side, you only send down one page of results, whereas if you filter with client-side JavaScript, you have to send all 50 pages down and then filter them in the browser.
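To illustrate the tradeoff described above, here is a minimal sketch of the client-side approach, with hypothetical product data: every product has already been shipped to the browser, so filtering is instant, but the full catalog payload had to be downloaded first.

```javascript
// Hypothetical product data, as it would arrive in the browser.
// With client-side filtering, ALL of this is downloaded up front.
const allProducts = [
  { name: "Shirt A", category: "boys", price: 15 },
  { name: "Shirt B", category: "girls", price: 25 },
  { name: "Shirt C", category: "boys", price: 30 },
];

function filterClientSide(products, category) {
  // Filtering happens in memory; no extra request, but the full
  // payload was already sent regardless of which filter is used.
  return products.filter((p) => p.category === category);
}

console.log(filterClientSide(allProducts, "boys").length); // 2
```

Server-side filtering inverts this: each filter change costs a request, but the response contains only the matching page of results.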
-
Hi Daniel,
Another option may be to use Java on your filter page, so that however customers filter the products, the URL path stays the same, with extra parameters in the URL to narrow the results. I find this the best way, as you have one URL for every sort of customization/filter and can avoid duplicate content.
For example: Macys
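A minimal sketch of the parameter-based approach described above (the paths and filter names are hypothetical): the page path stays constant, filters live in the query string, and every filtered variant canonicalizes back to the bare path.

```javascript
// Build a filtered URL: the path is stable, filters go in the query string.
function buildFilterUrl(basePath, filters) {
  const params = new URLSearchParams(filters).toString();
  return params ? `${basePath}?${params}` : basePath;
}

// Derive the canonical: strip the query string, so every filter
// combination shares one canonical URL.
function canonicalFor(url) {
  return url.split("?")[0];
}

const filtered = buildFilterUrl("/shirts", { color: "blue", sort: "price" });
console.log(filtered);               // /shirts?color=blue&sort=price
console.log(canonicalFor(filtered)); // /shirts
```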
-
Hi Daniel,
You're going to have to walk a fine line between having a page for every possible combination of filtered results that a user might search for AND appearing to have a ton of pages that are nearly identical, thereby suffering the wrath of Panda when it sees what it thinks is duplicate content.
The easy way out is to have one page for each category and, no matter what filters are applied, rel=canonical to that category page. Dupe content problem solved.
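A sketch of that "one canonical per category" rule, using a hypothetical domain and category path: every filtered variant of a category emits the same canonical tag pointing at the bare category URL.

```javascript
// Emit the <link rel="canonical"> tag for a category page.
// No matter which filters produced the current URL, the canonical
// points at the bare category path.
function canonicalLinkTag(categoryPath, origin = "https://www.example.com") {
  return `<link rel="canonical" href="${origin}${categoryPath}">`;
}

// /shirts?fit=slim&sort=price and /shirts?color=red both emit:
console.log(canonicalLinkTag("/shirts"));
// <link rel="canonical" href="https://www.example.com/shirts">
```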
So why isn't this the ideal solution?
#1 You may be missing out on targeting combinations of categories and filters that users will commonly search for. Let's say you were selling clothing, and a category was shirts, and you had a filter for men/women/boys/girls. By making all shirts list pages rel=canonical to the overall shirts list page (with no filters), you'd be missing an opportunity to target "boys shirts".
#2 You may be missing opportunities to pour more link juice to the individual product pages. It's unclear (to me, anyway) whether Google adds the link juice from all pages rel=canonical'ed to a page, or whether Google simply treats rel=canonical as "oh ya, I've already seen & dealt with this page". Certainly in my testing I've seen places where pages rel=canonical'ed to another page actually still show up in the search results, so I'd say rel=canonical isn't as solid as a 301.
So what do you do? I'd recommend a mix. Figure out what combinations you think you can get search traffic from, and find a way to break down the complete set of combinations of filters and categories to target those, and to rel=canonical every page to one of your targeted pages.
It's entirely possible (likely, even) that you'll end up with a mix. For instance, going back to my earlier example, suppose you had another filter: price range. You might want to target "boys shirts", but not "boys shirts under $20". So, while "boys" was a filter value and "under $20" was a filter value, you might rel=canonical all pages in the category "shirts" with a filter value of "boys" to the page that has just that category and that one filter set, regardless of the price filter's setting.
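The mixed strategy above can be sketched as a small rule: some filters (here, hypothetically, "gender") are worth their own targeted landing pages and are kept in the canonical URL, while everything else (like price) is dropped, so those variants all canonicalize to the nearest targeted page. The filter names and URL shape are illustrative assumptions, not Moz's or Google's prescription.

```javascript
// Filters that get their own targeted landing pages (assumed example).
const TARGETED_FILTERS = ["gender"];

// Compute the canonical URL for a category page with filters applied:
// targeted filters are kept, everything else (e.g. price) is dropped.
function canonicalForFilters(category, filters) {
  const kept = Object.entries(filters)
    .filter(([key]) => TARGETED_FILTERS.includes(key))
    .map(([key, value]) => `${key}=${value}`)
    .join("&");
  return kept ? `/${category}?${kept}` : `/${category}`;
}

// "boys shirts under $20" canonicalizes to the "boys shirts" page:
console.log(canonicalForFilters("shirts", { gender: "boys", price: "under-20" }));
// /shirts?gender=boys
```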
Clear as monkey poop?