Spaces in URL line
-
Hi Gurus,
I recently made the mistake of putting a space into a URL between the two words that make up my primary keyword. Think
www.example.com/Jelly Donuts/mmmNice.php
instead of
www.example.com/JellyDonuts/mmmNice.php
This mistake now needs fixing to www.example.com/JellyDonuts/mmmNice.php to pass W3 validation, but it has been in place for a while, and most articles/documents under 'Jelly Donuts' are not ranking well (which is probably the obvious outcome of the mistake).
I am wondering whether the best solution from an SEO ranking viewpoint is to:
1. Change the article directory immediately to www.example.com/JellyDonuts/mmmNice.php and rel=canonical each article to the new correct URL. Take out the 'trash' using robots.txt
or to
2. 301 redirect www.example.com/Jelly Donuts to the www.example.com/JellyDonuts directory?
or perhaps something else?
Thanks in advance for your help with this sticky (but tasty) conundrum,
Brad
-
I would use a dash ("Jelly-Donuts") and 301 to that page. Essentially, only use a canonical when there is a direct reason for the similar content to be there, like a dynamic database search with a similar "SEO version", or pagination.
Given that Jelly%20Donuts/mmmNice.php will no longer be needed and is an exact duplicate of its new location, a 301 is best here.
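For what it's worth, a minimal .htaccess sketch of that redirect (a hedged example, assuming Apache with mod_rewrite; mod_rewrite compares against the already-decoded path, so the %20 shows up as a literal space):
# 301 everything under the old "Jelly Donuts" directory to the hyphenated one.
RewriteEngine On
RewriteRule "^Jelly Donuts/(.*)$" /Jelly-Donuts/$1 [R=301,L]
Anything Google had indexed under the old path should then consolidate onto the new URLs over time.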
-Phil
-
I agree with Webdeal, go with the 301
-
Hi BM7,
I would go with 301
Some more information for you - http://www.seomoz.org/learn-seo/redirection
Cheers
Dmitriy
Related Questions
-
How to fix duplicate URLs from Embedded Twitter Feed
We have an embedded Twitter feed on our blog and we're seeing that every time we create a new post, a Twitter URL is created, thus resulting in duplicate content. Example:
Blog post URL: http://domain.com/blog-title
Duplicate URL automatically created: http://domain.com/blog-title/www.twitter.com/handlename
What is the best way to fix this?
Intermediate & Advanced SEO | annie.graupner
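One hedged possibility, assuming Apache and that the duplicates always have the Twitter path appended: 301 them back to the underlying post (a band-aid for the symptom, not a fix for whatever the embed script is doing):
# Redirect /blog-title/www.twitter.com/handlename back to /blog-title.
RewriteEngine On
RewriteRule ^(.+)/www\.twitter\.com/.+$ /$1 [R=301,L]
-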
Change MediaWiki URLs to - instead of _
Is there a way to do this? I have a new wiki set up at http://hiddentriforce.com/zelda-wiki/index.php/Main_Page and I want to change the default underscores to hyphens. This is what I have been trying:
Intermediate & Advanced SEO | Atomicx
Options +FollowSymLinks
RewriteEngine On
RewriteBase /
# Skip the four underscore-replacement rules for anything that is not an .html or .php URL.
RewriteRule !\.(html|php)$ - [S=4]
# Replace up to four underscores per pass with hyphens, flagging rewritten requests via the uscor variable.
RewriteRule ^([^_]*)_([^_]*)_([^_]*)_([^_]*)_(.*)$ $1-$2-$3-$4-$5 [E=uscor:Yes]
RewriteRule ^([^_]*)_([^_]*)_([^_]*)_(.*)$ $1-$2-$3-$4 [E=uscor:Yes]
RewriteRule ^([^_]*)_([^_]*)_(.*)$ $1-$2-$3 [E=uscor:Yes]
RewriteRule ^([^_]*)_(.*)$ $1-$2 [E=uscor:Yes]
# If anything was replaced, 301 to the hyphenated URL.
RewriteCond %{ENV:uscor} ^Yes$
RewriteRule (.*) http://hiddentriforce.com/zelda-wiki/index.php/$1 [R=301,L]
-
How long does Google index old URLs?
Hey guys, we are currently in the process of redesigning a site, but in two phases due to timeline issues, so there will be up to a 4-week gap between the 1st and 2nd set of redirects. These URLs will be idle for 4 weeks before the phase-two content is ready. What effect, if any, will this have on the domain and page authority? Thanks, Rob
Intermediate & Advanced SEO | daracreative
-
Google: How to See URLs Blocked by Robots?
Google Webmaster Tools says we have 17K out of 34K URLs that are blocked by our Robots.txt file. How can I see the URLs that are being blocked? Here's our Robots.txt file:
User-agent: *
Disallow: /swish.cgi
Disallow: /demo
Disallow: /reviews/review.php/new/
Disallow: /cgi-audiobooksonline/sb/order.cgi
Disallow: /cgi-audiobooksonline/sb/productsearch.cgi
Disallow: /cgi-audiobooksonline/sb/billing.cgi
Disallow: /cgi-audiobooksonline/sb/inv.cgi
Disallow: /cgi-audiobooksonline/sb/new_options.cgi
Disallow: /cgi-audiobooksonline/sb/registration.cgi
Disallow: /cgi-audiobooksonline/sb/tellfriend.cgi
Disallow: /*?gdftrk
Sitemap: http://www.audiobooksonline.com/google-sitemap.xml
Intermediate & Advanced SEO | lbohen
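As an aside, one way to test individual URLs against that file is Python's built-in urllib.robotparser. A rough sketch (the sample URLs are hypothetical, and note the standard parser may not honor Google's wildcard extension used in the /*?gdftrk line):
from urllib import robotparser

# Fetch and parse the live robots.txt.
rp = robotparser.RobotFileParser()
rp.set_url("http://www.audiobooksonline.com/robots.txt")
rp.read()

# Hypothetical URL sample, e.g. pulled from the sitemap or server logs.
urls = [
    "http://www.audiobooksonline.com/demo/page.html",
    "http://www.audiobooksonline.com/some-audiobook.html",
]
for url in urls:
    if not rp.can_fetch("*", url):
        print("Blocked:", url)
-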
Indexed nonexistent pages: problem appeared after we 301'd the url/index to the url.
I recently read that if a site has 2 pages that are live, such as http://www.url.com/index and http://www.url.com/, they will come up as duplicates. I read that it's best to 301 redirect http://www.url.com/index to http://www.url.com/; this helps avoid duplicate content and keeps all the link juice on one page. We did the 301 for one of our clients and we got about 20,000 errors for pages that do not exist. The errors are for pages that are indexed but do not exist on the server. We are assuming that these indexed (nonexistent) pages are somehow linked to http://www.url.com/index. The links are showing 200 OK. We took off the 301 redirect from the http://www.url.com/index page; however, we now still have 2 exact duplicate pages, www.url.com/index and http://www.url.com/. What is the best way to solve this issue?
Intermediate & Advanced SEO | Bryan_Loconto
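For reference, a loop-safe way to 301 /index to the root on Apache is to match the client's original request line rather than the internally rewritten path. A hedged sketch, assuming mod_rewrite in the site root's .htaccess:
# Only redirect when the browser itself asked for /index or /index.php,
# so internal rewrites to the index file don't re-trigger the rule.
RewriteEngine On
RewriteCond %{THE_REQUEST} "^GET /index(\.php)?[? ]" [NC]
RewriteRule ^index(\.php)?$ / [R=301,L]
-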
How to deal with old, indexed hashbang URLs?
I inherited a site that used to be in Flash and used hashbang URLs (i.e. www.example.com/#!page-name-here). We're now off of Flash and have a "normal" URL structure that looks something like this: www.example.com/page-name-here
Here's the problem: Google still has thousands of the old hashbang (#!) URLs in its index. These URLs still work because the web server doesn't actually read anything that comes after the hash. So, when the web server sees the URL www.example.com/#!page-name-here, it basically renders the homepage while keeping the full URL structure intact (www.example.com/#!page-name-here). Hopefully, that makes sense.
So, in Google you'll see this URL indexed (www.example.com/#!page-name-here), but if you click it you essentially are taken to our homepage content (even though the URL isn't exactly the canonical homepage URL, which should be www.example.com/).
My big fear here is a duplicate content penalty for our homepage. Essentially, I'm afraid that Google is seeing thousands of versions of our homepage. Even though the hashbang URLs are different, the content (i.e. title, meta description, page content) is exactly the same for all of them. Obviously, this is a typical SEO no-no. And, I've recently seen the homepage drop like a rock for a search of our brand name, which has ranked #1 for months. Now, admittedly we've made a bunch of changes during this whole site migration, but this #! URL problem just bothers me. I think it could be a major cause of our homepage tanking for brand queries.
So, why not just 301 redirect all of the #! URLs? Well, the server won't accept traditional 301s for the #! URLs because the # seems to screw everything up (the server doesn't acknowledge what comes after the #). I "think" our only option here is to try and add some 301 redirects via JavaScript. Yeah, I know that spiders have a love/hate (well, mostly hate) relationship with JavaScript, but I think that's our only resort... unless someone here has a better way? If you've dealt with hashbang URLs before, I'd LOVE to hear your advice on how to deal with this issue. Best, -G
Intermediate & Advanced SEO | Celts18
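One server-side angle worth noting: under Google's AJAX crawling scheme, Googlebot fetches #! URLs as ?_escaped_fragment_= URLs, which the server does receive, so those crawler requests can be 301'd without JavaScript. A hedged sketch, assuming Apache and that the fragment maps directly to the new path:
# Googlebot asks for /#!page-name-here as /?_escaped_fragment_=page-name-here.
RewriteEngine On
RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.+)$
# The trailing ? drops the query string from the redirect target.
RewriteRule ^$ /%1? [R=301,L]
-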
URL Structure for Directory Site
We have a directory that we're building and we're not sure if we should try to make each page an extension of the root domain or utilize sub-directories as users narrow down their selection. What is the best practice here for maximizing your SERP authority?
Choice #1 - Hyphenated architecture (no sub-folders):
1) State Page /state/
2) City Page /city-state/
3) Business Page /business-city-state/
4) Location Page /locationname-city-state/
or...
Choice #2 - Using sub-folders on drill down:
1) State Page /state/
2) City Page /state/city
3) Business Page /state/city/business/
4) Location Page /locationname-city-state/
Again, just to clarify, I need help in determining what the best methodology is for achieving the greatest SEO benefits. Just by looking it would seem that choice #1 would work better because the URLs are very clear and SEF. But, at the same time it may be less intuitive for search. I'm not sure. What do you think?
Intermediate & Advanced SEO | knowyourbank
-
Spammy? Long URLs
Hi All: Is it true that URLs such as the following one are viewed as "spammy" (besides being too long), and that such URLs will negatively affect ranks for keywords and page ranks?
http://www.repairsuniverse.com/ipod-parts-ipod-touch-replacement-repair-parts-ipod-touch-1st-gen-replacement-repair-parts.html
My thinking is that the page will perform better once it is 301 redirected to a shorter page name, such as:
http://www.repairsuniverse.com/ipod-touch-1G-replacement-parts.html
It also appears that these long URLs are more likely to break, creating unnecessary 404s. Thanks for your insight on this issue!
Intermediate & Advanced SEO | holdtheonion
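If the shorter URL does go live, the 301 itself can be a one-liner (a hedged sketch, assuming Apache with mod_alias):
# Permanently point the long product URL at the shorter one.
Redirect 301 /ipod-parts-ipod-touch-replacement-repair-parts-ipod-touch-1st-gen-replacement-repair-parts.html /ipod-touch-1G-replacement-parts.html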