Capital Letters in URLs?
-
Having capital letters in URLs is not bad for SEO; Google does not treat it as negative SEO and it will not affect your ranking. That said, I recommend using lowercase in URLs because it is user-friendly and search-engine-friendly, and you may run into duplicate content issues if search engines see upper- and lower-case variations of URLs that all evidently point to the same content. Read Matt Cutts's advice on URLs: http://www.seosean.com/blog/matt-cutts-advice-on-urls-page-names
-
I agree with Neil. It's not bad, just good practice to keep them lowercase so that there's no confusion. Your best bet is to use a consistent format and mirror it in your canonical URLs so that only that variation gets crawled and indexed.
-
Whilst it's not necessarily "bad" per se, the implications can be, so this kind of canonicalisation issue needs to be taken care of using URL rewrites and permanent 301 redirects.
Typically, on a Windows-based server (without any URL rewriting), a 200 (OK) status code will be returned for every version, regardless of the combination of upper- and lower-case letters used, giving search engines duplicate content to index and giving other people duplicate content to link to. This naturally dilutes rankings and link equity across the two (or more) identical pages.
There is an excellent section on solving canonicalisation issues on Windows IIS servers in this SEOmoz article by Dave Sottimano.
On a Linux server (without any URL rewriting) you will usually get a 200 for the lower-case version, and a 404 (Not Found) for versions with upper-case characters. Whilst search engines won't index the 404s, you are potentially wasting link equity passed to non-existent pages, and it can be really confusing for users, too.
There is a lot of info around the web about solving Linux canonicalisation issues (here is an article from YouMoz). If your site uses a CMS like Joomla or WordPress, most of these issues are solved by the default .htaccess file, and completely eliminated when you combine it with a well-chosen extension or two.
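Whichever server you are on, the rewrite logic itself is simple. Here is a rough sketch of the lowercase redirect in Python (the function name and example URLs are just for illustration, not from any particular CMS or server module):

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalise_path(url):
    """Return (status, location): a 301 pointing at the lowercase
    path for mixed-case URLs, or a 200 for URLs that are already
    canonical. Only the path is lowered; the query string is left
    untouched, since query parameter values are often case-sensitive."""
    parts = urlsplit(url)
    lower_path = parts.path.lower()
    if lower_path == parts.path:
        # Already canonical: serve the page normally.
        return 200, url
    # Mixed case: permanently redirect to the lowercase version.
    canonical = urlunsplit((parts.scheme, parts.netloc, lower_path,
                            parts.query, parts.fragment))
    return 301, canonical
```

With this in place, a request for /About-Us gets a 301 to /about-us, while /about-us itself returns a 200, so only one version ever gets indexed or accumulates links.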
You can help the search engines figure out which version of a page you regard as the original by using the rel="canonical" link element in the HTML head. This passes link equity and rankings from duplicate versions to the main version.
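For example, if both /About-Us and /about-us resolve, every variant would carry the same element in its head (example.com and the path are placeholders):

```html
<link rel="canonical" href="http://www.example.com/about-us" />
```

Search engines then consolidate signals from all case variants onto the one URL you have declared canonical.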