Rel="canonical" and rel="alternate" both necessary?
-
We are fighting duplicate content issues across multiple domains. We have a few Magento stores on different country-code domains; for example, domain.com and domain.ca, where domain.com is the "main" domain.
We have set up rel="alternate" hreflang tags like:
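For example, something along these lines on each product page (the en-us/en-ca codes and exact URLs here are just illustrative):
<link rel="alternate" hreflang="en-us" href="http://domain.com/product.html" />
<link rel="alternate" hreflang="en-ca" href="http://domain.ca/product.html" />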
The question is, do we need to add custom rel="canonical" tags on domain.ca that point to domain.com?
For example, should domain.ca/product.html point to the equivalent domain.com URL?
Also, how far will rel="canonical" be followed? For example, if we have:
domain.ca/sub/product.html canonical to domain.com/sub/product.html
then,
domain.com/sub/product.html canonical to domain.com/product.html
-
I'm honestly not completely clear on what the different URLs are for - I'd just add a note to keep the core difference between canonical tags and 301s in mind. A canonical tag only impacts Google and, eventually, search results. A 301 impacts all visitors (and moves them to the other page). A lot of people get hung up on the SEO side, but the two methods are very different for end-users.
As Tom said, if these variations have no user value, you could consolidate them altogether with 301s. I always hesitate to suggest it without in-depth knowledge of the site, though, because I've seen people run off and do something dangerous.
-
What's the purpose of the URL if there's not even any sorting or anything unique going on? If it's a sorted URL (say, by size, smallest to largest, for the /little-league/ URL), it might actually be useful to develop some unique category content to let the page rank separately.
If the content is totally unique, I don't think you could really go wrong redirecting. To be safe, I'd probably rely on analytics to answer the question "what impact will redirection have?" For instance, is there a difference in conversion rate between the URLs? If you see a conversion bump from a more specific URL, you might want to sleuth out what's causing it.
-
Would you worry about it if the categories are somewhat useful for users to drill down into the content?
For example:
/product.html
/aluminum-baseball-bats/product.html
/little-league-baseball-bats/product.html
They don't sell bats, but it is the easiest way to describe it, I guess. In this case, would you still 301 redirect the two longer URLs to /product.html?
-
Yes, provided that the /category1/ and /category2/ hierarchy doesn't help the user experience (e.g. product segmentation based on, say, color and brand, which would be useful for users to drill down into).
I like 301s better because they are permanent, unambiguous, and respected by all engines, and chiefly because they eliminate the possibility of inbound-link dilution, since the redirected URLs are never seen.
-
Yeah, don't use rel=canonical for the same purpose as rel=alternate - the canonical tag will override the alternate/lang tag and may cause your alternate versions to rank incorrectly or not at all. It can be a bit unpredictable. If you only wanted one version to show up in search results, then rel=canonical would be ok, but rel=alternate is a softer signal to help Google rank the right page in the right situation. It's not perfect, but that's the intent.
As for multiple canonicals like what you described, that's essentially like chaining 301 redirects. As much as possible, avoid it - you'll lose link equity, and Google may simply not honor them in some cases. There's no hard-and-fast limit, and two levels may be okay in some cases, but I think it's just a recipe for trouble long-term. Fix the canonicals to be single-hop wherever possible.
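To sketch the single-hop version using the URLs from your example (assuming domain.com/product.html is the page you ultimately want indexed), both of the other pages would point straight at it rather than at each other:
On domain.ca/sub/product.html: <link rel="canonical" href="http://domain.com/product.html" />
On domain.com/sub/product.html: <link rel="canonical" href="http://domain.com/product.html" />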
-
Thanks, that is what I was thinking. I just need to know more about whether the bots will follow the canonicals past one level when they point to a different domain and, if so, how many levels deep across the different sites.
-
Interesting idea, I might have to do that. Right now I have canonical tags on the .com site.
It is a Magento store, so out of the box it creates dirty duplicate content when products are in different categories. For example, Magento creates the following product pages:
domain.com/store/productcategory1/product.html
domain.com/store/productcategory2/product.html
domain.com/store/product.html
In this case, I have canonical tags on the category URLs pointing to the main domain.com/store/product.html.
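Concretely, that means each category-path version carries a canonical pointing at the root product URL, something like this in the head of domain.com/store/productcategory1/product.html (and the same on the productcategory2 version):
<link rel="canonical" href="http://domain.com/store/product.html" />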
So you think it would be better to do a 301 redirect for the different product URLs that are in subcategories?
-
Miles,
On your last question, I'm wondering whether those two canonical tags are necessary. Are the /sub/ versions of those pages necessary for user experience? If not, I'd add a canonical tag to the .com version, then 301 redirect /sub/product.html to /product.html. That would help you avoid splitting link authority.
-
Hey Miles,
Both are for different uses, and they may or may not be used on the same page, depending on your situation.
If the content in the .ca and .com versions is the same, then you should add both rel="canonical" and rel="alternate" hreflang: the hreflang tags on each version pointing to itself and to the other version, and the canonical pointing to the version you consider definitive.
If the content isn't the same, then the cross-domain rel="canonical" isn't needed (though a self-referencing canonical on each language/alternate version is still suggested); only the hreflang/alternate annotations should be in place.
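As a rough sketch of the first case, assuming domain.com is the definitive version and that you're targeting English for the US and Canada (the hreflang values are just an assumption), the head of domain.ca/product.html would contain something like:
<link rel="canonical" href="http://domain.com/product.html" />
<link rel="alternate" hreflang="en-ca" href="http://domain.ca/product.html" />
<link rel="alternate" hreflang="en-us" href="http://domain.com/product.html" />
The .com version would carry the same two hreflang tags plus a canonical pointing to itself.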
You can read more on Dr. Pete's post here: http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
Hope that helps!