Single Folder vs Root
-
I'm working on a multi-state attorney website and I'm going back and forth on URLs. I thought I'd see what the community thinks.
lawsite.com/los-angeles/car-accident-lawyer vs. lawsite.com/los-angeles-car-accident-lawyer
I should note this site will have over a dozen city locations, with different practices.
-
My Friend,
I think that is fine. I would do that.
I wish you all the best in your project!
-
Don't overthink it, really. I'm working on that too right now with positive effects, i.e. /subject/another-subject/. It would be good if you link to all the independent pages from /subject/ as well, including a dropdown menu on /subject/ with all the /another-subject/ pages.
-
Agreed, thanks!
-
Thanks for the great reply. Yes, quite a few practice areas. So it sounds like I should go the city folder route.
Follow-up question: do you think I should do /westchester-attorney/slip-and-fall-accident-lawyer, or am I getting a little spammy?
-
I recommend Joseph's approach. There are many benefits to it: manageability, scalability, and SEO. You can address all the practice areas available in specific locations, as well as rank the firm more strongly in each location through keyword relevance.
-
Hello Friend,
Good question.
Are they only doing car accident cases? I assume that they are doing more.
Doing a folder for the city will allow you to create a hub city page that links out to the different practices for that city, and they should all link back to support the hub page. See how they did it here:
https://mirmanlawyers.com/westchester/ (tier 2, pillar page, hub page)
https://mirmanlawyers.com/westchester/car-accident-lawyer/
https://mirmanlawyers.com/westchester/slip-and-fall-accident-lawyer/
If you only have one practice to focus on, I suggest you go for lawsite.com/los-angeles-car-accident-lawyer, but if you have many practices, I would go for lawsite.com/los-angeles/car-accident-lawyer and create a valuable sub-page for each practice and each location.
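For illustration, the internal linking might look something like this (the URLs follow your example; the anchor text is just made up):

<!-- On the hub page lawsite.com/los-angeles/ link out to each practice page -->
<a href="/los-angeles/car-accident-lawyer/">Los Angeles Car Accident Lawyer</a>
<a href="/los-angeles/slip-and-fall-accident-lawyer/">Los Angeles Slip and Fall Lawyer</a>

<!-- On each practice page, link back to support the hub -->
<a href="/los-angeles/">Los Angeles Personal Injury Lawyers</a>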
I wish you the best of luck with your project!
-
Related Questions
-
Flat architecture or deep folders?
We have an e-commerce client that is launching a new site. In setting up for it, they decided they want to change the site's navigation and URL structure. All else being equal, the new site will have appropriate 301s, and it's built on Magento, so the product pages are all structured as website.com/product-A, but the category pages will now be deeper than before. So what was website.com/product-category/product-sub-category will now be website.com/more-generic-category/product-category/new-subcategory/product-category. Hope that makes sense. I'm not as worried about the 301s or specific products, but I'm worried that the category pages dropping a folder level deeper will hurt page authority. Any thoughts? Am I being overly nervous?
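For reference, a minimal sketch of what the 301s might look like in an Apache .htaccess file (the paths are the placeholders from the question, not real URLs; Magento can also manage redirects through its own URL rewrite tool):

Redirect 301 /product-category/product-sub-category /more-generic-category/product-category/new-subcategory

One rule per old category path; mod_alias matches by path prefix, so sub-paths under the old category are redirected along with it.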
Intermediate & Advanced SEO | BCutrer
-
.ac.uk subdomain vs .co.uk domain
I'd be grateful if I could check my thinking... I've agreed to give some quick advice to a non-profit organisation who are in the process of moving their website from an .ac.uk subdomain to a .co.uk domain. They believe that their SEO can be improved considerably by making this migration. From my experience, I don't see how this could be the case. Does the unique domain in itself offer enough ranking benefit to justify this approach? The subdomain is on a very high-authority domain with many pre-existing links, which makes me even more nervous about this approach. Does anyone have any opinions on this that they could share, please? I'm guessing that it is possible to migrate safely and that there might be branding advantages, but from an actual SEO point of view there is not that much benefit? It looks like most of their current traffic is branded traffic.
Intermediate & Advanced SEO | RG_SEO
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo

The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.

We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.

We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.

Robots.txt advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.

Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)

Noindex advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)

Noindex disadvantages:
- Difficult to implement: vehicle details pages are served using Ajax, so they have no <head> tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending noindex based on querystring variables, similar to this stackoverflow solution (a sketch of this approach follows below the question). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.

Hash (#) URL advantages:
- By using hash (#) hrefs for the links on Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with Javascript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache stuff

Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since they can't crawl/follow them?

Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.

If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.

My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links structured like this.

Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
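For reference, a minimal sketch of the X-Robots-Tag approach mentioned under "Noindex disadvantages," assuming Apache with mod_rewrite and mod_headers enabled (the querystring variable "vehicle_id" is a placeholder, not the plugin's real parameter):

# Flag requests for vehicle details pages by a querystring variable
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)vehicle_id= [NC]
RewriteRule .* - [E=NOINDEX_PAGE:1]

# Send a noindex header on flagged requests; no <head> markup is needed
Header set X-Robots-Tag "noindex" env=NOINDEX_PAGE

Intermediate & Advanced SEO | browndoginteractive
-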
Linking and non-linking root domains
Hi, is there any effect on SEO based on the ratio of linking root domains to non-linking root domains, and if so, what is the effect? Thanks
Intermediate & Advanced SEO | halloranc
-
Duplicate Title - Magento Products / Kunena Forum - Nofollow vs. Follow
Hello Mozzers! This is my first post, so hopefully I can explain this in a way that is clear! I just signed up for my PRO membership and wanted to finally start hacking away at this company's site that I work for. I ran my report and noticed in my Crawl Diagnostics: 715 Duplicate Page Titles. I noticed that a lot of those were coming from the customer login in Magento, so I added:

User-agent: *
Disallow: /customer/

as mentioned in this link: http://stackoverflow.com/questions/10742052/duplicate-content-issues-for-login-page Any suggestions on that? Also, we have a Kunena forum on our site and have noticed all of the nofollow links. People say I should change them to follow links, but other people say leave them or I will get penalized. Any advice would be greatly appreciated!
Intermediate & Advanced SEO | NRMWEBWORKS
-
How to redirect www vs. non-www in IIS
I have been wanting to set our site up to redirect non-www to www for the SEO benefits so often described here on SEOmoz. I see a lot on Apache, but not so much for IIS. Are there any developers here that can point me to a tutorial for people with little IIS experience?
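For reference, a minimal sketch of the usual approach with the IIS URL Rewrite module, placed inside <system.webServer> in web.config (example.com is a placeholder for your own domain):

<rewrite>
  <rules>
    <rule name="Redirect non-www to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^example\.com$" />
      </conditions>
      <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>

redirectType="Permanent" issues the 301, and {R:1} carries the requested path through to the www host.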
Intermediate & Advanced SEO | KJ-Rodgers
-
Pros and cons of separate sites vs. subdomains
First timer and new to SEO. We are designing a website for a customer in South America that has 3 distinct divisions. We want to develop the site in the most SEO-effective way possible. Each division will have its own keyword focus, its own associations, and its own links. They will all link to each other from the main page, company.com. We were thinking of creating 4 separate domains such as:

www.company.com - basic high-level company information with links to the other external sites below
www.company-constructionsoftware.com
www.company-itservices.com
www.company-graphicdesign.com

So my questions are: 1. Is it better in the long run to have domains that have the search terms in the URL, as specified above? We can optimize for the main site as well as the individual sites separately. 2. Would the result be the same using subdomains? For example, itservices.company.com. 3. Should we possibly host the 3 different sites in different locations? We want to make sure that we are building on the best possible architecture for future optimization and internet marketing. What are the pros and cons? Thanks!!!!
Intermediate & Advanced SEO | brantwadz
-
Article + 2 links to the same root domain
I am writing an article that has 2 links. The first one is to: http://xxx.net/the-be... The second one is to: http://xxx.net/ The links have different anchor texts, and I am wondering about the link power they will bring to my website. Will both links count? Will the first one send significantly more PR than the second one? I am asking this because my MAIN objective is the second link. Much thanks, Alex
Intermediate & Advanced SEO | IamSharp