Rel="canonical" and rel="alternate" both necessary?
-
We are fighting some duplicate content issues across multiple domains. We have a few Magento stores on different country-code domains, for example domain.com and domain.ca, where domain.com is the "main" domain.
We have set up rel="alternate" hreflang tags between the two domains.
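Roughly like this on domain.com/product.html (a simplified sketch - the exact hreflang values here are just an illustration):

    <link rel="alternate" hreflang="en-us" href="http://domain.com/product.html" />
    <link rel="alternate" hreflang="en-ca" href="http://domain.ca/product.html" />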
The question is: do we need to add custom rel="canonical" tags on domain.ca that point to domain.com?
For example, for domain.ca/product.html to point to domain.com/product.html.
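In the page head, that would be something like this (just an illustration of what I mean):

    <link rel="canonical" href="http://domain.com/product.html" />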
Also, how far do search engines follow rel="canonical"? For example, if we have:
domain.ca/sub/product.html canonical to domain.com/sub/product.html
then,
domain.com/sub/product.html canonical to domain.com/product.html
-
I'm honestly not completely clear on what the different URLs are for - I'd just add a note to keep the core difference between canonical and 301s in mind. A canonical tag only impacts Google, and eventually, search results. A 301 impacts all visitors (and moves them to the other page). A lot of people get hung up on the SEO side, but the two methods are very different for end-users.
As Tom said, if these variations have no user value, you could consolidate them altogether with 301s. I always hesitate to suggest it without in-depth knowledge of the site, though, because I've seen people run off and do something dangerous.
-
What's the purpose of the URL if there's not even any sorting or anything unique going on? If it's a sorted URL (say, by size, smallest to largest, for the /little-league/ URL), it might actually be useful to develop some unique category content to let the page rank separately.
If the content is totally unique, I don't think you could really go wrong redirecting. To be safe, I'd probably rely on analytics to answer the question "what impact will redirection have?" For instance, is there a difference in conversion rate between the URLs? If you see a conversion bump from a more specific URL, you might want to sleuth out what's causing it.
-
Would you worry about it if the categories are somewhat useful for users to drill down into the content?
For example:
/product.html
/aluminum-baseball-bats/product.html
/little-league-baseball-bats/product.html
They don't sell bats, but it's the easiest way to describe it, I guess. In this case, would you still 301 redirect the two longer URLs to /product.html?
-
Yes, provided that the /category1/ and /category2/ hierarchy doesn't help the user experience (e.g. product segmentation based on, say, color and brand, which would be useful for users to drill down to).
I like 301s better because they are permanent, unambiguous, and respected by all engines, and chiefly because they eliminate the possibility of inbound link dilution, since the redirected URLs are never seen.
-
Yeah, don't use rel=canonical for the same purpose as rel=alternate - the canonical tag will override the alternate/lang tag and may cause your alternate versions to rank incorrectly or not at all. It can be a bit unpredictable. If you only wanted one version to show up in search results, then rel=canonical would be ok, but rel=alternate is a softer signal to help Google rank the right page in the right situation. It's not perfect, but that's the intent.
As for multiple canonicals like the ones you described, that's essentially like chaining 301 redirects. Avoid it as much as possible - you'll lose link equity, and Google may just not honor them in some cases. There's no hard-and-fast limit, and two levels may be OK in some cases, but I think it's just a recipe for trouble long-term. Fix the canonicals to be single-hop wherever possible.
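In your example, single-hop would mean domain.ca/sub/product.html pointing straight at the final URL instead of chaining through domain.com/sub/product.html - something like this (a sketch based on the URLs you listed):

    <link rel="canonical" href="http://domain.com/product.html" />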
-
Thanks, that's what I was thinking. I just need to know more about whether the bots will follow canonicals past one level when they point to a different domain, and if so, how many levels across the different sites.
-
Interesting idea, I might have to do that. Right now I have canonical elements on the .com.
It's a Magento store, so out of the box it creates dirty duplicate content when products are in different categories. For example, Magento creates the following product pages:
domain.com/store/productcategory1/product.html
domain.com/store/productcategory2/product.html
domain.com/store/product.html
In this case, I have canonical elements pointing the category URLs to the main domain.com/store/product.html.
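In other words, both category versions carry something like this in the head (simplified):

    <link rel="canonical" href="http://domain.com/store/product.html" />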
So you think it would be better to do a 301 redirect for the different product URLs that are in subcategories?
-
Miles,
On your last question, I'm wondering whether those two canonical tags are necessary. Are the /sub/ versions of those pages necessary for the user experience? If not, I'd add a canonical element to the .com version, then redirect /sub/product.html to /product.html. That would help you avoid splitting link authority.
-
Hey Miles,
The two are for different uses, and they may or may not be used on the same page depending on your situation.
If the content of the CA and COM versions is the same, then you should add both rel="canonical" and rel="alternate": the rel="alternate" tags pointing to the page itself and to the other version, and the canonical pointing to the version you consider definitive.
If the content isn't the same, then the rel="canonical" isn't needed (though a self-referencing canonical on each language/alternate version is still suggested); only the alternate tags should be in place.
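On domain.ca/product.html, for instance, the identical-content combination might look like this (a simplified sketch - adjust the URLs and hreflang values to your actual setup):

    <link rel="canonical" href="http://domain.com/product.html" />
    <link rel="alternate" hreflang="en-ca" href="http://domain.ca/product.html" />
    <link rel="alternate" hreflang="en-us" href="http://domain.com/product.html" />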
You can read more on Dr. Pete's post here: http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
Hope that helps!