Rel="canonical" and rel="alternate" both necessary?
-
We are fighting some duplicate content issues across multiple domains. We have a few Magento stores with different country codes, for example domain.com and domain.ca, where domain.com is the "main" domain.
We have set up the different rel="alternate" hreflang tags, like:
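Roughly like this on each product page (illustrative URLs and language codes):

  <link rel="alternate" hreflang="en-us" href="http://domain.com/product.html" />
  <link rel="alternate" hreflang="en-ca" href="http://domain.ca/product.html" />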
The question is: do we need to add custom rel="canonical" tags to domain.ca that point to domain.com?
For example, for domain.ca/product.html to point to its domain.com counterpart:
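Something like this in the head of the .ca page (illustrative):

  <link rel="canonical" href="http://domain.com/product.html" />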
Also, how far will rel="canonical" be followed? For example, if we have:
domain.ca/sub/product.html canonical to domain.com/sub/product.html
then,
domain.com/sub/product.html canonical to domain.com/product.html
-
I'm honestly not completely clear on what the different URLs are for - I'd just add a note to keep the core difference between canonical and 301s in mind. A canonical tag only impacts Google, and eventually, search results. A 301 impacts all visitors (and moves them to the other page). A lot of people get hung up on the SEO side, but the two methods are very different for end-users.
As Tom said, if these variations have no user value, you could consolidate them altogether with 301s. I always hesitate to suggest it without in-depth knowledge of the site, though, because I've seen people run off and do something dangerous.
-
What's the purpose of the URL if there's not even any sorting or anything unique going on? If it's a sorted URL (say, sorted by size, smallest to largest, for the /little-league/ URL), it might actually be useful to develop some unique category content to let the page rank separately.
If the content is just a straight duplicate, I don't think you could really go wrong redirecting. To be safe, I'd probably rely on analytics to answer the question "what impact will redirection have?" For instance, is there a difference in conversion rate between the URLs? If you see a conversion bump from a more specific URL, you might want to sleuth out what's causing it.
-
Would you worry about it if the categories are somewhat useful for users to drill down into the content?
For example:
/product.html
/aluminum-baseball-bats/product.html
/little-league-baseball-bats/product.html
They don't sell bats, but it's the easiest way to describe it, I guess. In this case, would you still 301 redirect the two longer URLs to /product.html?
-
Yes, providing that the /category1/ and /category2/ hierarchy doesn't help the user experience (e.g. product segmentation based on, say, color and brand, which would be useful for users to drill down to).
I like 301s better because they are permanent, unambiguous, respected by all engines, and, chiefly, because they eliminate the possibility of inlink dilution, since the redirected URLs are never seen.
-
Yeah, don't use rel=canonical for the same purpose as rel=alternate - the canonical tag will override the alternate/lang tag and may cause your alternate versions to rank incorrectly or not at all. It can be a bit unpredictable. If you only wanted one version to show up in search results, then rel=canonical would be ok, but rel=alternate is a softer signal to help Google rank the right page in the right situation. It's not perfect, but that's the intent.
As for multiple canonicals like what you described, that's essentially like chaining 301 redirects. As much as possible, avoid it - you'll lose link equity, and Google may just not honor them in some cases. There's no hard-and-fast limit, and two levels may be ok in some cases, but I think it's just a recipe for trouble long-term. Fix the canonicals to be single-hop wherever possible.
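For example, rather than having domain.ca/sub/product.html point to domain.com/sub/product.html and then having that page point to domain.com/product.html, put the final target straight into the .ca page (illustrative markup):

  <!-- on domain.ca/sub/product.html: single hop to the final URL -->
  <link rel="canonical" href="http://domain.com/product.html" />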
-
Thanks, that is what I was thinking. I just need to know more about whether the bots will follow the canonicals past one level when pointing to a different domain, and if so, how many levels across the different sites.
-
Interesting idea, I might have to do that. Right now I have canonical elements on the .com site.
It is a Magento store, so out of the box it creates messy duplicate content when products sit in different categories. For example, Magento creates the following product pages:
domain.com/store/productcategory1/product.html
domain.com/store/productcategory2/product.html
domain.com/store/product.html
In this case I have canonical elements on the category versions pointing to the main domain.com/store/product.html.
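i.e. each of the category versions carries something like:

  <link rel="canonical" href="http://domain.com/store/product.html" />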
So you think it would be better to do a 301 redirect for the different product URLs that are in subcategories?
-
Miles,
On your last question, I'm wondering if those two canonical tags are necessary? Are the /sub/ versions of those pages necessary for user experience? If not, I'd add a canonical element to the .com version, then redirect the /sub/product.html to /product.html. That would help you avoid splitting link authority.
-
Hey Miles,
They are both for different uses and may or may not be used on the same page, depending on your situation.
If the content in the .ca and .com versions is the same, then you should add both rel="canonical" and rel="alternate": the rel="alternate" tags pointing to the page itself and to the other version, and the canonical pointing to the version you consider definitive.
If the content isn't the same, then the rel="canonical" isn't needed (though a self-referencing canonical on each language/alternate version is still suggested); only the alternate tags should be in place.
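For the "same content" scenario, the head of domain.ca/product.html could look something like this (example URLs and language codes, treating the .com page as the definitive version):

  <link rel="canonical" href="http://domain.com/product.html" />
  <link rel="alternate" hreflang="en-ca" href="http://domain.ca/product.html" />
  <link rel="alternate" hreflang="en-us" href="http://domain.com/product.html" />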
You can read more on Dr. Pete's post here: http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
Hope that helps!