Factors that affect Google.com vs .ca
-
Though my company is based in Canada, we have a .com URL, we're hosted on servers in the U.S., and most of our customers are in the U.S. Our marketing efforts are focused on the U.S. Heck, we even drop the "u" in "colour" and "favour"!
Nonetheless, we rank very well on Google.ca and rather poorly on Google.com.
One hypothesis is that we have more backlinks from .ca domains than from .com, but I don't believe that to be true. For sure, the highest-quality links we have come from .coms like NYTimes.com.
Any suggestions on how we can improve the .com rankings, other than keeping on with the link building?
-
Thanks for letting us know how things worked out, Aspirant.
Andy
-
Final verdict:
I took the plunge. Even though our product is geography-agnostic, I changed our Webmaster Tools geographic target setting to the U.S.
Sure enough, we immediately saw some improvements in the google.COM rankings. There wasn't much of an impact on .CA, and any loss there was definitely made up for by the new .COM traffic.
I'll be doing a deeper dive into the data later.
Thanks everyone.
-
Hey Rob,
I have a bit of experience with this - I had a Canadian-based site that wanted to target the States. We were ranking well on .CA and not so well on .COM. I actually did this in WMT for a site - set the geo-targeting to the USA - and after a week or so started noticing a huge jump in .COM rankings for a lot of keywords. What was great was that the rankings in .CA stayed consistent.
The only drop I noticed was in the .CA (Canada Only) searches. Those completely dropped off the map, but normal searches in google.ca were fine. I don't know if this will always happen, but this is my experience.
-
I had exactly the same situation with a Spanish site of mine (.es). For a long time I was first in google.com but nowhere to be found in google.es. Everybody kept telling me this was because I had a lot of .com links and none were .es. But as time passed, without any link changes, the keywords ranked well in google.es too. So could it be that some countries are just a few months behind?
-
I have noticed that getting links from the appropriate TLD extension really determines where you rank in each country's Google SERPs.
You can search for sites related to yours on a specific TLD by putting inurl:.com in Google along with your keywords.
The same thing works for all other extensions.
This makes finding .edu link opportunities a breeze, for example.
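As a rough sketch of that pattern, here's one way to generate those prospecting queries for a handful of TLDs (the keyword and the TLD list below are just placeholders - swap in your own):

```python
# Build Google link-prospecting queries restricted by TLD.
# "inurl:" is the operator mentioned above; "site:" is a stricter alternative
# that limits results to domains ending in the given extension.
keyword = "wheelchair trays"              # placeholder keyword
tlds = [".com", ".ca", ".edu", ".org"]    # extensions to prospect

for tld in tlds:
    print(f"{keyword} inurl:{tld}")
    print(f"{keyword} site:{tld}")
```

Paste any of the printed queries straight into Google to see link prospects on that extension.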
Besides link building, you will want to make sure that in Webmaster Tools you have set your targeted country to the country you want to rank best in. For example, I have a site about college students which I've set to target the US, since in Canada post-secondary education is referred to as both "university" and "college", so the audience there is split much more.
Hope this helps.
-
Sorry, I meant David Mihm -- oops!
-
I suspect having the settings in WMT set for the USA "might" hurt your performance in other areas; however, the small company website (that gets 90% of its business from the USA) I mentioned in my prior response has the setting set to the USA, and it ranks #3 for its main search term in both .ca and .com. Having claimed a Local Places account might also be an issue. I'd suggest you contact either Todd Mihm (http://www.davidmihm.com/blog) or Mike Blumenthal (http://blumenthals.com/blog) for an answer to that question.
-
Thanks for the answer. A couple of questions come to mind:
Won't setting our Google Webmaster Tools target to United States hurt our performance in other parts of the world? So far I've made a point of ensuring that Webmaster Tools has us as not geo-specific ("Target users in: unlisted" on the Site Configuration > Settings screen).
Also (on the advice of another SEO advisor) we verified our Google Places location, so is there a risk of sending mixed signals to Google and getting hurt by that?
-
The competition is usually stronger in the USA (.com) arena than in Canada (.ca). I have a little company site (with little work done in the way of SEO) that ranks #3 in both .ca and .com for "wheelchair trays". You may want to adjust your settings in Google Webmaster Tools to ensure your site is set to the United States rather than Canada. As David Kauzlaric has mentioned, you will definitely benefit from having more links from US-based sites - I'd focus on that as a first step.
-
Still no breakthroughs on this issue. Our performance keeps improving on .ca and .com, which is obviously good, but our ranking on .com is always very, very far behind our .ca performance.
It's still a mystery to me, given that most of the inbound links are from U.S.-based, .com websites.
The only answer that works in my mind is that .ca uses a different algorithm. But I'm still very interested in hearing other thoughts!
Thanks,
Rob
-
Hi Rob,
Have you seen any changes with your rankings on Google.ca and Google.com? Do you have any other questions or comments you can add to help others that may be in a similar situation?
Here's hoping you got to enjoy two long weekends in a row from both countries!
-
Agree.
We did a link building campaign for a German website (.de) and most of the links were from .com websites. It started to rank very well on google.com, while google.de saw only a minor impact. It's clear that the links should come from the same country zone if you want to rank in that particular area.
You should focus on getting links from .com domains - that should be easier than building links from .ca domains anyway.
You should also set up a Google Maps (Places) listing with your US location - if you have one. That alone should bring up your results in the US.
-
It's a pretty well-known fact that non-US versions of Google don't use exactly the same algorithm and therefore lag "behind". This could be a case where you are employing methods that were effective a couple of years ago and are still working well on .CA, but not as well on .COM.
The biggest thing you can do is work on high-quality content and build links. Remember, linking alone is somewhere around 70% of the algorithm. Work on getting more authoritative .COM links from sites like NYT, USAToday, etc.
Also, if a good portion of your links are from .CA, that very well could affect it too!
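If you want to sanity-check how your link profile splits by country extension, a quick sketch like the one below will tally linking TLDs from an exported backlink list (the file name and the "Source URL" column are assumptions - adjust them to whatever your link tool's CSV export actually uses):

```python
import csv
from collections import Counter
from urllib.parse import urlparse

# Assumed export format: a CSV with one backlink per row and a "Source URL"
# column containing full URLs (including http://), e.g. from a link tool export.
tld_counts = Counter()
with open("backlinks.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        host = urlparse(row["Source URL"]).hostname or ""
        if "." in host:
            tld_counts[host.rsplit(".", 1)[-1].lower()] += 1

# Show the most common extensions, e.g. com: 412, ca: 57, org: 31 ...
for tld, count in tld_counts.most_common(10):
    print(f"{tld}: {count}")
```

If .ca turns out to dominate the list, that supports the theory above; if .com already dominates, the geo-targeting setting is the more likely lever.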