Rel=Canonical vs. No Index
-
OK, this is a long-winded one. We're going to spell out what we've seen, then ask a few questions below, so please bear with us!
We have websites with products listed on them and are looking for guidance on whether to use rel=canonical or some version of No Index for our filtered product listing pages. We work with a couple different website providers and have seen both strategies used.
Right now, one of our web providers uses No Index, No Follow tags and Moz alerted us to the high frequency of these tags. We want to make sure our internal linking structure is sound and we are worried that blocking these filtered pages is keeping our product pages from being as relevant as they could be. We've seen recommendations to use No Index, Follow tags instead, but our other web provider uses a different method altogether.
Another vendor uses a rel=canonical strategy which we've also seen when researching Nike and Amazon's sites. Because these are industry leading sites, we're wondering if we should get rid of the No Index tags completely and switch to the canonical strategy for our internal links. On that same provider's sites, we've found rel=canonical tags used after the first page of our product listings, and we've seen recommendations to use rel=prev and rel=next instead.
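For clarity, here's roughly what each of the tags we're comparing looks like in a page's head (all URLs below are placeholders, not our actual pages):

```html
<!-- No Index, Follow: keep the page out of the index but let link equity flow -->
<meta name="robots" content="noindex, follow">

<!-- rel=canonical: point a filtered listing at the main category page -->
<link rel="canonical" href="https://www.example.com/category/widgets/">

<!-- rel=prev / rel=next: pagination hints, shown here as they'd appear on page 2 -->
<link rel="prev" href="https://www.example.com/category/widgets/">
<link rel="next" href="https://www.example.com/category/widgets/?page=3">
```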
With all that being said, we have three questions:
1) Which strategy (rel=canonical vs. No Index) do you recommend as optimal for website crawlers and for boosting our site's relevance?
2) If we should be using some version of No Index, should we use Follow or No Follow?
3) Depending on the product, we have multiple pages of products for each category. Should we use rel=prev & rel=next instead of rel=canonical on the pages after page one?
Thanks in advance!
-
Oleg, I like your thought process on this.
I am dealing with this exact issue and have two brilliant minds arguing over what the best approach is. In reviewing the above, I agree with the approach. Canonical links to the first page of "Honda-civic-coupe" make perfect sense.
Currently we use prev/next, with a self-referencing rel=canonical on the URLs of subsequent pages, and we are not no-indexing page 2+. The negative impact is that Google will, from time to time, show a pagination page (e.g., page 6) as a sitelink under the #1 search result, and some pagination pages are indexed. Landing-page traffic to these is near zero. Our decision is whether to no-index them or rel=canonical to the first page.
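To make that concrete, our page-2 markup currently looks something like this (domain and paths are illustrative, not our real URLs):

```html
<!-- Current setup on page 2: self-referencing canonical plus prev/next -->
<link rel="canonical" href="https://www.example.com/luxury-communities/springfield/?page=2">
<link rel="prev" href="https://www.example.com/luxury-communities/springfield/">
<link rel="next" href="https://www.example.com/luxury-communities/springfield/?page=3">

<!-- Alternative under debate: canonical back to page 1 instead -->
<!-- <link rel="canonical" href="https://www.example.com/luxury-communities/springfield/"> -->
```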
In my case, the pages list new home communities, e.g., all the different luxury communities in a specific city. While they all belong to the same category, can be described similarly as a group, and will have near-duplicate metas, each community (list element) is unique. So, page #1 can be viewed as quite differentiated.
Here are the arguments:
-
Rel=canonical to the first page. As much as we think each shingle (i.e., page of 15 communities) is unique (the 15 descriptions, amenities, locations, what each is near, and the things you can do there are all unique), as a group it can be considered just a list of communities. By pointing back to page #1 we are saying this is a collected list spanning 3 pages of luxury communities in a given city. This will concentrate authority on the page that is most relevant.
-
No-index the subsequent pages. When Google said "near duplicate," they really meant to limit that scope to pages where the items are exactly the same or nearly so. If the individual pages can be seen as unique content simply because of their differentiated in-page list elements, then they are not really duplicates, and rel=canonical is inappropriate. Using it could at some point be viewed as a manipulative, over-reaching use of rel=canonical. While this may help the page rank better, it may be considered not okay down the road.
Option #1 would seem to have a better immediate ranking impact, but is there some real risk that it would be considered manipulative, since the pages would not look to Google like near-enough duplicates?
Glad to hear what you or others have to say.
-
Hey Oleg,
Thanks for the input - we'll look into making those updates!
-
Yes, you would canonical to that searchnew.aspx page.
In this scenario, I would set up mod_rewrite to create a "Category" page for each specific model so you can rank for more pages.
e.g., /model/Honda-Civic-Coupe/ would be a static page, and you could canonical all of the other filters to their respective pages.
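As a rough sketch of that rewrite idea (the paths and pattern here are hypothetical; on an ASP.NET/IIS stack you'd use the equivalent IIS URL Rewrite rules rather than Apache's mod_rewrite):

```apacheconf
# Hypothetical rule: expose a clean, static-looking category URL per model,
# internally served by the existing search page.
RewriteEngine On

# /model/Honda-Civic-Coupe/ -> /searchnew.aspx?model=Honda-Civic-Coupe
# (the app still has to map the hyphenated slug back to its internal model name)
RewriteRule ^model/([A-Za-z0-9-]+)/?$ /searchnew.aspx?model=$1 [L,QSA]
```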
-
Hey Everyone,
Thanks for the answers and advice. Here's an example of a filtered inventory listings page on one of our sites that isn't currently using a rel=canonical. Would you just have the canonical point back to the main "searchnew" page? If you have any other insights into improving this page's structure, please feel free to send suggestions.
http://www.leithhonda.com/searchnew.aspx?model=Civic+Coupe
Thanks all!
-
I would say using rel=canonical would be best. I am guessing your filter system is using an anchor or a hashbang? We only do ecommerce work, and we typically just point the canonical of the filter page at the category being filtered. The reason is that you don't want to reduce the chances of the category page ranking in the SERPs.
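As an illustration of that mapping, here's a minimal sketch, assuming the filter state lives entirely in the query string and/or hashbang fragment (real pagination parameters would need separate handling, and the function name is just for this example):

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_for_filtered(url: str) -> str:
    """Derive the canonical category URL for a filtered listing URL
    by dropping the query string (filter state) and any #! fragment.

    Assumes filters live entirely in the query/fragment; if some query
    parameters are meaningful (e.g. pagination), handle those separately.
    """
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    # Rebuild the URL with query and fragment stripped
    return urlunsplit((scheme, netloc, path, "", ""))

print(canonical_for_filtered(
    "https://www.example.com/category/shoes/?color=red&size=10#!brand=acme"
))
# -> https://www.example.com/category/shoes/
```

The emitted URL is what you'd put in the filtered page's `<link rel="canonical">`, so every filter combination consolidates onto the one category page.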
But honestly, like Oleg said, we'd need to see the site to give a 100% best possible answer. We have used several different strategies with our clients. Some involve actually rewriting the filter URLs as landing pages and trying to rank those as well.
-
Hey Oleg,
Thanks for the response. We're actually looking for info on our product listings pages, or search results pages within the site. Would this advice still apply to those pages?
-
Hard to give an answer without seeing the site... ideally, you don't use canonicals or noindex and instead have one page per product.
-
Canonical is better overall, I'd say, as long as the two pages you are merging are (almost) identical.
-
Keep the follow; it doesn't hurt and only boosts the pages it links to.
-
Again, tough to say without more detail, but it sounds like you should use canonical (pagination basically "merges" the paginated pages into one long page, so to speak, so if you have the same content over and over again, it's best to canonical).
-