Weird 404 Errors
-
Hi All,
Although my Moz error scans have been pretty clean for a while, a law firm site I manage recently turned up 80+ new 404 errors since the last scan.
I'm a little baffled, as the URL it reports looks like this:
http://www.yoursite.com/ http://www.yoursite.com/resource.html
For some reason it seems to be calling the root domain twice before the actual resource.
I installed MODX Revolution 2.2.6-pl on the site in question, and I'm hoping a canonical plugin I just started using will take care of these.
Has this happened to anyone else? What did you do to solve the issue?
Thanks for your time and any tips!
-
Hey Dan,
I had kind of assumed that it might be a false alarm from the Moz scan. I typically use Xenu to check for broken links periodically, and it hasn't shown any.
Thanks for the tip!
-
Hi David,
I would recommend cross-checking this with Screaming Frog and/or Webmaster Tools. It's only a real concern if web spiders and users are experiencing these 404s. I have seen cases where Moz's crawler hits 404s that Google and/or users do not.
If you get the errors in Screaming Frog or Webmaster Tools as well, here's what you need to do to fix them:
- Go to the source page of one of the broken links.
- View the HTML source.
- Do a Ctrl-F and search for the broken link (have it copied to your clipboard).
- Determine where in the code it's coming from.
- Then you can probably debug it from there.
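If it helps, those steps can be scripted. This is just a rough sketch — the sample HTML and the link below are placeholders, not from your actual site:

```python
import re

def find_link_positions(html, link):
    """Return the character offsets where `link` appears in the page's HTML
    source -- the programmatic version of view-source plus Ctrl-F."""
    return [m.start() for m in re.finditer(re.escape(link), html)]

# Hypothetical page source with one normal link and one broken one:
source = '<a href="/about.html">About</a> <a href="%0Ahttp://www.yoursite.com/resource.html">Resource</a>'
print(find_link_positions(source, "%0Ahttp://www.yoursite.com/resource.html"))
```

Once you know the offset, you can look at the surrounding markup and work out which template or field is emitting the bad href.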
Let us know if that gets you there.
Thanks!
-Dan
-
Hi David, I apologize for the delayed reply. I'm going to check with some other Associates and see if they can help troubleshoot. In the meantime, please let us know if you have any updates. Thanks! (Christy)
-
Hey Christy,
Nope, never did figure out an answer!
I took a break for a while but now I'm back trying to divine a solution.
Re: Dana's suggestion, I did make sure that our canonical plugin was using absolute URLs, but it looks as though that did not solve the issue.
I'm not sure how to locate a potential PHP glitch that might be causing this... any pointers?
-
Hi David, were you able to resolve this issue?
-
Yes, I have seen this problem before. Bradley and Michael are both correct in that it had something to do with relative versus absolute URLs. In our case, it was being caused because we had relative URLs in all of our canonical tags. As soon as we changed them to absolute URLs, the strange-looking 404 errors went away. Hope that helps!
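For anyone who wants to audit their own pages for this, here's a rough sketch of a relative-canonical check. The regex assumes `rel` appears before `href` in the tag, and the sample markup is illustrative, not from the site in question:

```python
import re

def find_relative_canonicals(html):
    """Flag rel=canonical hrefs that are not absolute http(s) URLs."""
    hrefs = re.findall(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']*)["\']',
        html,
        re.IGNORECASE,
    )
    return [h for h in hrefs if not h.lower().startswith(("http://", "https://"))]

# A relative canonical like this one is the kind that caused our weird 404s:
page = '<head><link rel="canonical" href="/resource.html"></head>'
print(find_relative_canonicals(page))
```

Run it over a sample of your rendered pages; any hrefs it returns are canonicals you'd want to switch to absolute URLs.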
-
Bradley's response is spot on. I coincidentally manage a large site in the legal area that has had errors like yours, although the error isn't law-related! As Bradley implies, this is typically the unintended result of code in the CMS that is spitting out a stray line feed. I'm guessing that what you're seeing is the result of something related to PHP rather than an error in user input. White space entered into user-editable areas is typically stripped; when actual code like this gets inserted, it's usually because some line of PHP was edited and saved without anyone realizing the effect. I've had this happen before with RSS feeds, where one little glitch will append a forward slash to the end of one URL and join it to the beginning of another. Good luck finding the solution, which shouldn't be too tough.
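To see the point concretely (sketched in Python purely for illustration, even though the CMS side here is PHP): a template value that picks up a stray line feed turns an absolute href into one that browsers treat as relative. The URL and markup are made up:

```python
# A hypothetical CMS field value that accidentally starts with a line feed:
resource_url = "\nhttp://www.yoursite.com/resource.html"

bad_href = '<a href="' + resource_url + '">Resource</a>'           # broken output
good_href = '<a href="' + resource_url.strip() + '">Resource</a>'  # whitespace stripped

print(repr(bad_href))   # the leading \n means the href no longer starts with a scheme
print(repr(good_href))  # stripping restores the absolute URL
```

The fix on the PHP side is the same idea: find where the value is assembled and trim the stray whitespace before it reaches the href.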
-
%0A is the percent-encoded line feed character, so it looks like your CMS may be spitting out links that browsers and crawlers interpret as relative links.
If your link appears like this:
<a href="%0Ahttp://www.yoursite.com/resource.html">
it will be interpreted as relative and result in the URL you found on your Moz error report. It's probably a problem with how the CMS is outputting the href attribute, but it's hard to say without more information.