Moz-Specific 404 Errors Jumped with URLs that don't exist
-
Hello,
I'm going to try to be as specific as possible about this weird issue, but I'd rather not share specifics about the site unless you think they're pertinent.
So to summarize, we have a website that's owned by a company that is a division of another company. For reference, we'll say that:
OURSITE.com is owned by COMPANY1 which is owned by AGENCY1
This morning, we got about 7,000 new errors in Moz only (these errors are not in Search Console) for URLs with the company name or the agency name at the end of the URL.
So, let's say one post is: OURSITE.com/the-article/
This morning we have errors in Moz for the URLs:
OURSITE.com/the-article/COMPANY1
OURSITE.com/the-article/AGENCY1
multiplied by the 7,000+ articles we have created. Every single post ever created is now an error in Moz because of these two URL additions, which seem to come out of nowhere.
These URLs are not in our sitemaps, and they are not in Google... They simply don't exist, and yet Moz created an error for them. Unless they exist and I just don't see them.
Obviously there's a link to each company and agency site on the site in the about us section, but that's it.
-
Not a problem! It's great that Moz's crawler picked up on this issue, as it could have caused some problems over time if it had been allowed to get out of control.
-
Just wanted to update quickly. The mistakes in the email links as well as the links to the two company sites proved to be the problem. After recrawling the sites, the 7,000+ errors are gone.
It's interesting, because I was about to get very upset with Moz, thinking their bot had caused me half a day of headaches for nothing. It turned out they picked up an error before any other system did, one that would likely have done a lot of damage given that the broken links were all contact links meant to improve transparency.
Hopefully, we caught and fixed the problem in time. In any case, thanks for your help effectdigital.
-
A more common issue than you might think, and very likely to be the culprit.
-
I've just come across something...
In an attempt three days ago to be more transparent (it's a news site), we added "send me an email" links to each author's bio as well as links to the Company and the Agency in the footer.
Except these links weren't inserted correctly in the footer, and half the authors didn't get the right links either.
So instead of being a "mailto:" link, the anchor was just the email address, and when you hovered over it, the link pointed to the URL of the page with the author's email appended at the end... the same thing that's happening in the errors.
Same for the footer links. They weren't done correctly and were sending users to OURSITE.com/AGENCY1 instead of AGENCY1's website. I've made the changes and put in the correct links, and I have asked for a recrawl to see if that changes anything.
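To illustrate what was happening (a minimal sketch; the exact markup is my assumption based on what I found): the broken links had no "mailto:" scheme or "https://" prefix, so browsers and crawlers resolve them relative to the article URL, which produces exactly the phantom URLs in the report.

```python
from urllib.parse import urljoin

# The page the malformed links appear on (placeholder domain)
page_url = "https://OURSITE.com/the-article/"

# Hypothetical broken href values: no mailto: scheme, no https:// prefix
broken_hrefs = ["author@oursite.com", "COMPANY1", "AGENCY1"]

for href in broken_hrefs:
    # Relative resolution: what a browser or crawler does when following the link
    print(urljoin(page_url, href))

# Prints:
# https://OURSITE.com/the-article/author@oursite.com
# https://OURSITE.com/the-article/COMPANY1
# https://OURSITE.com/the-article/AGENCY1
```

The fix is what we've now done: "mailto:author@oursite.com" for the email links and absolute URLs for the company and agency links.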
-
At this point that doesn't really matter; the main thing is to analyse the referrer URL to see if there genuinely are any hidden malformed links.
-
It is assuredly very weird; we just have to determine whether Rogerbot has gone crazy in this summer heat or whether something has gone wrong with your link architecture somehow.
-
Yeah, that tells you to look at the referring URL. To see if you can track down a malformed link pointing to the error URL, look in the page's code (the underlying HTML source).
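If you want to check a page programmatically rather than by eye, here's a rough sketch (the two URLs are placeholders; swap in a real URL / Referring URL pair from the report):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

# Placeholder values; substitute a pair from your Moz export
referring_url = "https://OURSITE.com/the-article/"
error_url = "https://OURSITE.com/the-article/COMPANY1"

class HrefCollector(HTMLParser):
    """Collects every href attribute found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

html = urlopen(referring_url).read().decode("utf-8", errors="replace")
parser = HrefCollector()
parser.feed(html)

# Any raw href that resolves to the 404 URL is the malformed link to fix
for href in parser.hrefs:
    if urljoin(referring_url, href) == error_url:
        print(f"Malformed link found: href={href!r} resolves to {error_url}")
```

If nothing turns up in the rendered source, also check the raw template/theme files, since the bad href may only be visible there.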
-
Another update here...
I've checked about 50 of these errors and they all show the same stats for the problem URL page:
307 words, 22 Page Authority.
I don't know if it matters, just putting it out there.
-
True, but it's as if something is creating faux URLs out of each existing article, adding company names and emails to the end of the URL... It's very weird.
-
The referring URL in this case is the original URL without the added element in the permalink.
So
URL: OURSITE.com/the-article/COMPANY1
Referring URL: OURSITE.com/the-article/
Does that give any more info?
-
No need to freak out, though. As you say, they are "author@oursite.com" addresses, implying they're business emails (not personal emails), so you shouldn't have to worry about a data breach or anything. It is annoying, though.
-
The ones you want are... URL and Referring URL, I believe. "URL" should be the 404 pages; "Referring URL" would be the pages that could potentially be creating your problems.
-
UPDATE HERE:
I've just noticed that it is also adding the email of the author to the URL and creating an error with that as well.
So, there are three types of errors per post:
OURSITE.com/the-article/COMPANY1
OURSITE.com/the-article/AGENCY1
OURSITE.com/the-article/author@oursite.com
-
Do you mean downloading the CSV of the issue? I tried that and it gives me the following:
Issue Type,Status,Affected Pages,Issue Grouping Identifier,URL,Referring URL,Redirect Location,Status Code,Page Speed,Title,Meta Description,Page Authority,URL Length,Title Pixel Length,Title Character Count.
This isn't really useful, as it all relates to the 404 page itself.
I'm new to Moz, is there a direct line to an in-house resource that could tell us if it's a Rogerbot issue?
-
If you can export the data from Moz and it contains both a link source (the page the link is on) and a link target (the created broken URLs), then you might be able to isolate more easily whether it's you or Rogerbot. If the Moz UI doesn't give you that data, you'll have to ask a staff member whether it's possible to get it; they will likely pick this thread up and direct you to email (perfectly normal).
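If you do get the full export, here's a quick sketch of how you could slice it, assuming the "URL" and "Referring URL" columns from the CSV header you listed (the filename is a placeholder):

```python
import csv
from collections import Counter

suffixes = Counter()   # segments appended to the referring URL (e.g. "COMPANY1")
referrers = Counter()  # which pages are generating the most 404s

with open("moz_4xx_issues.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        url = row.get("URL", "")
        ref = row.get("Referring URL", "")
        referrers[ref] += 1
        # If the broken URL is just the referring URL plus an extra segment,
        # record that segment (a company name, agency name, or author email)
        if ref and url.startswith(ref):
            suffixes[url[len(ref):].strip("/")] += 1

print("Most common appended segments:", suffixes.most_common(10))
print("Referring pages generating the most 404s:", referrers.most_common(10))
```

If the same handful of appended segments shows up for nearly every referring page, the problem is almost certainly a template-level link (footer or author bio) rather than a Rogerbot bug.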
-
Thanks for the feedback. You're right about the 404 part; I should have phrased it differently. As you figured out, I meant that we are getting 404s for URLs that were never intended to exist, and we don't know how or why they are there.
We are investigating part 1, but my hope is that it is part 2.
Thanks again for taking the time to respond.
-
404s are usually for pages that 'don't exist', so that's pretty usual. This is either:
- Somewhere on your site, links are being malformed, leading to these duff pages (which may be happening invisibly, unless you look deep into the base / modified source code), and Google simply hasn't picked up on the error yet
- Something is wrong with Rogerbot and he's compiling hyperlinks incorrectly, thus running off to thousands of URLs that don't exist
At this juncture it could be either one; I am sure someone from Moz will be able to help you further.