Strange URLs for client's site
-
We just picked up a new client and I've been doing some digging around on their site. They have quite the wide variety of URLs that make for a rather confusing experience.
One of the milder examples is their "About" page. Normally I would expect something along the lines of:
www.website.com/about
Instead, I see:
www.website.com/default.asp?Page=About
I'm typically a graphic designer and know basically nothing about code, but I just assume this has something funky to do with how their website was constructed. I'm assuming this isn't particularly SEO friendly, but it doesn't seem too bad. Until I got to another section of their site. It's a section that logically should look like:
www.website.com/training/public-seminars
It's:
www.website.com/default.asp?Page=MT&Area=Seminars&Sub=MRM
Now that's nonsensical to me! Normally if a client has terrible URLs, I'd say let's do some redirects, but I guess I'm a little intimidated by these. Do the URLs have to be structured like this for some reason? Am I missing some important area of coding here?
However, the most bizarre example is a link back to their website from yellowpages.com. Where normally I would expect it to lead to their homepage, I get this bizarre-looking thing:
http://website1-px.rtrk.com
And as you browse through the site, that strange domain stays. For example the About page is now:
http://website1-px.rtrk.com/default.asp?Page=About
I would try to google this but I have no idea where to even start! What is going on with these links? Will we be able to fix them to something presentable without breaking their website?
-
Thank you for the great advice Dirk!
I will likely have to get one of my more technical co-workers to help with this, but now I can at least adequately describe the problem and the solution. Three separate URLs for the home page alone is definitely a priority fix.
Thank you again!
-
Hi,
You're quite right that having clean, readable URLs is useful - both for visitors and bots.
There is no technical need to have these 'ugly' URLs, as they can always be rewritten to something nicer. You will have to use a combination of URL rewriting and redirects - you can find some useful links here on how to implement the rewriting (the article is not very recent, but these basics haven't changed). If they use a CMS, it could also be useful to check the documentation - almost every decent CMS offers some built-in rewriting functionality.
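For example, assuming the site runs on Apache, the seminars URL above could be given a clean address with a pair of rules like this in .htaccess. This is only a sketch: the clean path is an assumption, and note that an .asp site may well run on IIS instead, where the same logic would go into web.config via the URL Rewrite module.
# 301-redirect direct requests for the ugly URL to the clean one.
# %{THE_REQUEST} is the raw request line, which internal rewrites don't
# change, so this doesn't loop with the mapping rule below. The trailing
# "?" drops the old query string from the redirect target.
RewriteEngine On
RewriteCond %{THE_REQUEST} ^GET\ /default\.asp\?Page=MT&Area=Seminars&Sub=MRM\ HTTP
RewriteRule ^ /training/public-seminars? [R=301,L]
# Internally map the clean URL back onto the real script
RewriteRule ^training/public-seminars/?$ /default.asp?Page=MT&Area=Seminars&Sub=MRM [L]
You'd repeat (or generalise) that pair of rules for each section of the site.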
The second issue with the strange domain name can be solved with a 301 redirect, by adding these lines to the .htaccess file of the "strange domain":
# Redirect every request for the old host (with or without www) to the new domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^olddomain\.com$ [OR]
RewriteCond %{HTTP_HOST} ^www\.olddomain\.com$
RewriteRule ^(.*)$ http://www.newdomain.com/$1 [R=301,L]
(No need to tell you that you'll have to replace olddomain and newdomain with the actual domain names.)
Apart from the wrong domain, the issue with the tracking parameters in these URLs could be solved by either a redirect or a canonical URL. With the redirect rule above, website1-px.rtrk.com will be redirected to www.yourdomain.com - but this doesn't get rid of the tracking code.
You could put a self-referencing canonical URL in the head of the pages.
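As a sketch, using the About page from above (the href is an assumption - it should be whichever version of the page you want indexed):
<!-- points every parameter/tracking variant of this page at one preferred URL -->
<link rel="canonical" href="http://www.website.com/default.asp?Page=About" />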
Or you could strip off the parameters using a redirect (you can find an example of how this could be done here).
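A minimal sketch of such a rule, assuming (hypothetically) that the tracking system appends a parameter named scid to the end of the query string:
RewriteEngine On
# Match a query string ending in the tracking parameter, capturing
# whatever legitimate parameters (e.g. Page=About) come before it
RewriteCond %{QUERY_STRING} ^(.*?)&?scid=[^&]*$
# Redirect to the same path keeping only the legitimate parameters;
# an empty %1 leaves a bare "?", which Apache drops from the redirect URL
RewriteRule ^(.*)$ /$1?%1 [R=301,L]
Test it carefully - this site depends on its query strings (default.asp?Page=...), so the rule must only ever remove the tracking parameter, never the rest.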
If you use the canonical solution, it could be a good idea to strip off the parameters in Google Analytics as well (the view settings have an "Exclude URL Query Parameters" field for exactly this).
Hope this helps,
Dirk