Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies.
Moz-Specific 404 Errors Jumped with URLs that don't exist
-
Hello,
I'm going to try to be as specific as possible about this weird issue, but I'd rather not share identifying details about the site unless you think they're pertinent.
So to summarize, we have a website that's owned by a company that is a division of another company. For reference, we'll say that:
OURSITE.com is owned by COMPANY1 which is owned by AGENCY1
This morning, we got about 7,000 new errors in Moz only (these errors are not in Search Console) for URLs with the company name or the agency name appended to the end.
So, let's say one post is: OURSITE.com/the-article/
This morning we have errors in Moz for URLs like:
OURSITE.com/the-article/COMPANY1
OURSITE.com/the-article/AGENCY1
multiplied across the 7,000+ articles we have created. Every single post ever published is now an error in Moz because of these two URL suffixes, which seem to come out of nowhere.
These URLs are not in our sitemaps, and they are not in Google... They simply don't exist, and yet Moz created an error for them. Unless they do exist and I just don't see them.
Obviously there's a link to each company and agency site on the site in the about us section, but that's it.
-
Not a problem! It's great that Moz's crawler picked up on this issue, as it could have caused some problems over time if it had been allowed to get out of control.
-
Just wanted to update quickly. The mistakes in the email links as well as the links to the two company sites proved to be the problem. After recrawling the sites, the 7,000+ errors are gone.
It's interesting, because I was about to get very upset with Moz, thinking their bot had cost me half a day of headaches for nothing. It turned out they picked up an error before any other system did, one that would likely have done a lot of damage given that the broken links were all contact links meant to improve transparency.
Hopefully, we caught and fixed the problem in time. In any case, thanks for your help effectdigital.
-
A more common issue than you might think, and very likely the culprit.
-
I've just come across something...
In an attempt three days ago to be more transparent (it's a news site), we added "send me an email" links to each author's bio as well as links to the Company and the Agency in the footer.
Except these links weren't inserted correctly in the footer, and half the authors didn't get the right links either.
So instead of being a "mailto:" link, it was just the bare email address; hovering over it showed the URL of the current page with the author's email appended... the same pattern that's showing up in the errors.
Same for the footer links: they weren't done correctly and were sending users to OURSITE.com/AGENCY1 instead of AGENCY1's website. I've made the changes and put in the correct links, and I have asked for a recrawl to see if that changes anything.
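For what it's worth, this is exactly how relative URL resolution works: an href with no scheme and no "mailto:" prefix is a relative reference, which browsers and crawlers resolve against the current page's URL. A minimal sketch with Python's `urllib.parse.urljoin` (the page and link values below are illustrative stand-ins, not the real site) reproduces the broken URLs:

```python
from urllib.parse import urljoin

page = "https://oursite.com/the-article/"

# A footer link written as href="AGENCY1" (no scheme) is a relative
# path, so it resolves under the current page, not to AGENCY1's site:
print(urljoin(page, "AGENCY1"))
# https://oursite.com/the-article/AGENCY1

# A bare email address (missing "mailto:") is treated the same way:
print(urljoin(page, "author@oursite.com"))
# https://oursite.com/the-article/author@oursite.com

# The corrected links resolve as intended:
print(urljoin(page, "https://agency1.example/"))
print(urljoin(page, "mailto:author@oursite.com"))
```

This is the standard resolution rule from RFC 3986, so any crawler following those hrefs would generate exactly the 404 URLs seen in the report.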
-
At this point that doesn't really matter; the main thing is to analyse the referring URL to see if there genuinely are any hidden malformed links.
-
It is assuredly very weird; we just have to determine whether Rogerbot has gone crazy in this summer heat or whether something went wrong with your link architecture somehow.
-
Yeah, that tells you to look at the referring URL. To see if you can track down a malformed link to the error URL, look in the page's code.
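If it helps, here is a rough way to scan a page's markup for both kinds of malformed link described in this thread: bare email addresses missing "mailto:" and schemeless links that resolve as relative paths. The markup and patterns below are illustrative assumptions, not the actual site code:

```python
import re

# Illustrative stand-in for a page's footer/author-bio markup --
# the first two links reproduce the mistakes described above.
html = '''
<a href="author@oursite.com">send me an email</a>
<a href="AGENCY1">Our agency</a>
<a href="mailto:editor@oursite.com">contact the editor</a>
<a href="https://agency1.example/">Our agency (fixed)</a>
'''

problems = []
for href in re.findall(r'href="([^"]+)"', html):
    if "@" in href and not href.startswith("mailto:"):
        # An email address with no mailto: scheme is a relative path.
        problems.append(("bare email, missing mailto:", href))
    elif not re.match(r"[A-Za-z][A-Za-z0-9+.-]*:|/", href):
        # No scheme and no leading slash: resolves under the current page.
        problems.append(("schemeless relative link", href))

for reason, href in problems:
    print(f"{reason}: {href}")
```

Running this over real page source (or a crawl of it) should surface any remaining links of either kind before a crawler turns them into fresh 404s.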
-
Other update here..
I've checked about 50 of these errors, and they all report the same stats for the problem URL page:
307 words, Page Authority 22.
I don't know if it matters, just putting it out there.
-
True, but it's as if something is creating faux versions of existing article URLs, appending company names and emails to the end... It's very weird.
-
The referring URL in this case is the original URL without the element added to the permalink.
So
URL: OURSITE.com/the-article/COMPANY1
Referring URL: OURSITE.com/the-article/
Does that give any more info?
-
No need to freak out, though: as you say, "author@oursite.com" implies they are business emails (not personal emails), so you shouldn't have to worry about a data breach or anything. It is annoying, though.
-
The ones you want are "URL" and "Referring URL", I believe. "URL" should be the 404 pages; "Referring URL" would be the pages that could potentially be creating your problems.
-
UPDATE HERE:
I've just noticed that it is also adding the email of the author to the URL and creating an error with that as well.
So, there are three types of errors per post:
OURSITE.com/the-article/COMPANY1
OURSITE.com/the-article/AGENCY1
OURSITE.com/the-article/author@oursite.com
-
Do you mean downloading the CSV of the issue? I tried that and it gives me the following:
Issue Type,Status,Affected Pages,Issue Grouping Identifier,URL,Referring URL,Redirect Location,Status Code,Page Speed,Title,Meta Description,Page Authority,URL Length,Title Pixel Length,Title Character Count.
That isn't really useful, as it all relates to the 404 page itself.
I'm new to Moz; is there a direct line to an in-house resource who could tell us if it's a Rogerbot issue?
-
If you can export the data from Moz and it contains both a link source (the page the link is on) and a link target (the created broken URLs), then you might be able to isolate more easily whether it's you or Rogerbot. If the Moz UI doesn't give you that data, you'll have to ask a staff member whether it's possible to get it; they will likely pick this thread up and direct you to email (perfectly normal).
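If the export does contain both columns, a short script can group the broken URLs under the pages that link to them, which quickly shows where the bad links live. The column names match the CSV header quoted earlier in the thread; the sample rows are illustrative, and in practice you would read the real export file instead:

```python
import csv
import io
from collections import defaultdict

# A small sample in the shape of the Moz 404 export (only the two
# columns that matter here are included).
sample = io.StringIO(
    "URL,Referring URL\n"
    "https://oursite.com/the-article/COMPANY1,https://oursite.com/the-article/\n"
    "https://oursite.com/the-article/AGENCY1,https://oursite.com/the-article/\n"
)

# Group each broken (404) URL under the page that links to it.
sources = defaultdict(list)
for row in csv.DictReader(sample):
    sources[row["Referring URL"]].append(row["URL"])

# The referrers generating the most broken links are the first place to look.
for referrer, broken in sorted(sources.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(broken)} broken links from {referrer}")
```

With the real export, replace the `io.StringIO` sample with `open("moz_export.csv", newline="")` (filename assumed) and the same grouping applies.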
-
Thanks for the feedback. You're right about the 404 part, I should have phrased it differently. As you figured out, I meant that we are getting 404s for URLs that were never intended to exist and that we don't know how/why they are there.
We are investigating part 1, but my hope is that it is part 2.
Thanks again for taking the time to respond.
-
404s are by definition for pages that 'don't exist', so that part is normal. This is either:
-
somewhere on your site, links are being malformed leading to these duff pages (which may be happening invisibly, unless you look deep into the base / modified source code). Google simply hasn't picked up on the error yet
-
something is wrong with Rogerbot and he's compiling hyperlinks incorrectly, thus running off to thousands of URLs that don't exist
At this juncture it could be either one, I am sure someone from Moz will be able to help you further