Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not removing the content entirely (many posts will remain viewable), we have locked both new posts and new replies.
URL Length Issue
-
Moz is telling me that my URLs are too long.
I did a little research and found that URL length is not really a serious problem; in fact, some people recommend ignoring the warning altogether.
I even found this explanation on the Moz blog:
"Shorter URLs are generally preferable. You do not need to take this to the extreme, and if your URL is already less than 50-60 characters, do not worry about it at all. But if you have URLs pushing 100+ characters, there's probably an opportunity to rewrite them and gain value.
This is not a direct problem with Google or Bing - the search engines can process long URLs without much trouble. The issue, instead, lies with usability and user experience. Shorter URLs are easier to parse, copy and paste, share on social media, and embed, and while these may all add up to a fractional improvement in sharing or amplification, every tweet, like, share, pin, email, and link matters (either directly or, often, indirectly)."
And yet I still have questions: why am I getting this warning that my URLs are too long, and what are the best practices for resolving it?
Thank You
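To make the "100+ characters" guideline from the quoted advice concrete, here's a minimal audit sketch. The cutoff comes from the quoted blog passage, not from Moz's exact crawler logic, and the sample URLs are invented:

```python
# Flag URLs whose total length exceeds a chosen threshold.
# The ~100-character cutoff comes from the advice quoted above;
# the sample URLs below are invented for illustration.

def find_long_urls(urls, max_length=100):
    """Return (url, length) pairs for URLs longer than max_length."""
    return [(u, len(u)) for u in urls if len(u) > max_length]

urls = [
    "https://example.com/local-seo",
    "https://example.com/services/marketing/seo/local-seo/"
    "some-very-long-descriptive-slug-that-keeps-going-and-going",
]

for url, length in find_long_urls(urls):
    print(f"{length} chars: {url}")
```

Running this against an exported list of site URLs would show which ones actually cross the threshold, rather than treating the warning as all-or-nothing.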
-
Question: if you start redirecting the longer URLs to the shorter ones, don't you actually get dinged for excessive 301s or daisy-chained 301s?
How do you handle this for large sites with products? Canonicals?
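On the daisy-chain point: the usual fix is to make every legacy URL redirect directly to its final destination instead of hopping through intermediates. A rough sketch of flattening a redirect map (the paths and the dict structure here are hypothetical, not any particular platform's config format):

```python
# Collapse daisy-chained redirects so every old URL points straight
# at its final destination. `redirects` maps old path -> new path;
# the example paths are hypothetical.

def flatten_redirects(redirects):
    """Resolve each source through the chain to its final target."""
    flattened = {}
    for src in redirects:
        seen = {src}
        target = redirects[src]
        # Follow the chain until we reach a path that is not itself redirected.
        while target in redirects:
            if target in seen:  # guard against redirect loops
                raise ValueError(f"Redirect loop involving {target}")
            seen.add(target)
            target = redirects[target]
        flattened[src] = target
    return flattened

redirects = {
    "/services/marketing/seo/local-seo": "/seo/local-seo",
    "/seo/local-seo": "/local-seo",
}
print(flatten_redirects(redirects))
```

With a map like this, each old URL gets a single 301 to the final short URL, so no chain ever forms.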
-
Agree with Steve above. URL length is a very minimal SEO factor. To shorten your URLs, you could analyze your site's "silo" structure and see whether any unnecessary segments can be removed from the path.
For example, if you have "xyz.com/services/marketing/seo/local-seo", you could cut "services" and "marketing" (and maybe even "seo") from the structure, so the URL becomes simply "xyz.com/local-seo", and then 301 the old URL to the new one, of course.
Check out this article from Search Engine Land for more info on URL structure for SEO: https://searchengineland.com/infographic-ultimate-guide-seo-friendly-urls-249397
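One way to apply that trimming at scale is to derive the short URL from the final path segment and keep an old-to-new map for the 301 rules. A minimal sketch, assuming (hypothetically) that the last segment alone is unique site-wide:

```python
from urllib.parse import urlparse

# Derive a shortened URL from the final path segment and build a
# 301 redirect map. Assumes the last segment is unique across the
# site; the example URL is hypothetical.

def shorten_url(url):
    """Keep only the scheme, host, and final path segment."""
    parts = urlparse(url)
    last_segment = parts.path.rstrip("/").split("/")[-1]
    return f"{parts.scheme}://{parts.netloc}/{last_segment}"

def build_redirect_map(urls):
    """Map each long URL to its shortened form, for use in 301 rules."""
    return {u: shorten_url(u) for u in urls}

old_urls = ["https://xyz.com/services/marketing/seo/local-seo"]
print(build_redirect_map(old_urls))
# prints {'https://xyz.com/services/marketing/seo/local-seo': 'https://xyz.com/local-seo'}
```

If last segments collide (e.g. two products both named "overview"), you'd need to keep more of the path for those URLs, so check for duplicates before generating the rules.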
-
It's a signal that may be valuable for some sites, but one that may not have a huge impact on its own.
I like shorter URLs because they end up being easier and friendlier to share; longer URLs can be a deterrent. I also like to pay close attention to the click depth of the page, not just the length of the URL.
I'm not sure you'll be hurt by a long URL like:
http://www.yourtld.com/blog/02/08/18/some-crazy-long-slug-nested-with-dates-that-triggers-moz-errors
I would bet you can get that page ranked if the content is valuable enough, even with a longer URL. The issues really start to pop up when a site's content can only be found by browsing deeply nested pages.
For example:
http://yourtld.com/services/cleaning/windows/indoors
Here I'm worried about the length, but more importantly about the overall structure of the URL and the site. When content sits 4-5 layers deep, it can be difficult for both users and crawlers to find, making the site less search friendly overall.
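URL path depth is only a proxy for click depth (true click depth is the number of clicks from the homepage and requires a crawl to measure), but the proxy is trivial to compute; a small sketch with made-up example URLs:

```python
from urllib.parse import urlparse

# Measure URL path depth as a rough proxy for click depth.
# True click depth is the number of clicks from the homepage and
# can only be measured by crawling; this just counts path segments.

def path_depth(url):
    """Number of non-empty path segments in the URL."""
    path = urlparse(url).path
    return len([seg for seg in path.split("/") if seg])

print(path_depth("http://yourtld.com/services/cleaning/windows/indoors"))  # prints 4
print(path_depth("http://yourtld.com/local-seo"))  # prints 1
```

Sorting a site's URLs by this number is a quick way to spot sections that are buried 4-5 layers deep.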