Issues with Moz producing 404 Errors from sitemap.xml files recently.
-
My last campaign crawl produced over 4k 404 errors resulting from Moz not being able to read some of the URLs in our sitemap.xml file. This is the first time we've seen this error, and we've been running campaigns for almost two months now -- no changes were made to the sitemap.xml file. The file isn't UTF-8 encoded, but rather Content-Type: text/xml; charset=iso-8859-1 (which is what Movable Type uses). Just wondering if anyone has had a similar issue?
-
Hi Barb,
I'm sure Joel will chime in as well, but just to clarify: it is probably not the UTF-8 encoding, or lack of it, that is causing the issue. For the sitemap URLs at least, it is simply the formatting of the XML being produced. As for whether the other errors you are seeing are caused by the same kind of thing: if you are seeing references to the same encoded characters (%0A, %09), then the answer is most likely yes.
So the issue is not UTF-8 related (there are plenty of non-UTF-8 sites on the web still!) but rather how the Moz crawler is reading your links, and whether other tools/systems will have the same trouble. Have you looked in Google Webmaster Tools to see if it reports similar 404 errors from the sitemap or elsewhere? If you see similar errors in GWT, then the issue is likely not restricted to the Moz crawler.
Beyond that, the fix for the sitemap at least should be relatively simple, and quite possibly the other Moz errors can be fixed just as easily by making small adjustments to the templates and removing the extra line breaks/tabs that are creating the issue. It is worth doing, so that these errors disappear and you can concentrate on the 'real' errors without all the noise.
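To illustrate what "removing the extra line breaks/tabs" could look like as a one-off cleanup (a minimal sketch, not a Moz or Movable Type tool -- the file paths and the idea of post-processing the generated file are my assumptions; the real fix belongs in the template):

```python
# Hypothetical post-processing step: strip stray whitespace from every
# <loc> value in a generated sitemap. File paths are placeholders.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)  # keep the default sitemap namespace on output

def clean_sitemap(src_path, dst_path):
    """Rewrite a sitemap with leading/trailing whitespace stripped from <loc>."""
    tree = ET.parse(src_path)
    for loc in tree.getroot().iter(f"{{{NS}}}loc"):
        if loc.text:
            loc.text = loc.text.strip()  # drops the \n and \t characters
    tree.write(dst_path, encoding="utf-8", xml_declaration=True)
```

Fixing the template that generates the file is still the proper solution; this only sanitizes the output after the fact.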
-
Joel,
The latest 404 errors have the same type of issue, and are all over the place in terms of referrer (none are the sitemap.xml) that I can see.
My question is: can the fact that we don't use UTF-8 encoding on our site potentially cause issues with other reporting? This is not something we can change easily, and I don't want to waste a great deal of effort sorting through "red herring" issues caused by the encoding we use on the site.
Thoughts?
barb
-
Thanks Joel,
We're looking into this.
barb
-
Thanks Lynn,
We are looking at that. The 4k 404 errors are gone now, but it's possible they will return.
It's a major change for us to switch to UTF-8, so it's not something that will happen anytime soon. I'll just have to be aware that it might be causing issues.
barb
-
Hey Brice,
I just want to add to Lynn's great answer the reason you're seeing the URLs the way they are, and to reinforce her point.
You have it formatted as such:
<loc>
			http://www.cmswire.com/cms/web-cms/david-hillis-10-predictions-for-web-content-management-in-2011-009588.php</loc>
The crawler converts everything to URL encoding, so those line feeds and tabs get converted to percent-encoded sequences. The reason your root domain is prepended is that %0A is not the proper start of a URL, so RogerBot assumes it's a relative link on the domain your sitemap is on.
The encoding thing is probably not affecting this.
Cheers,
Joel.
-
Hi,
It can be frustrating, I know, but if you are methodical you will get to the bottom of all the errors and then feel much better.
I'm not sure why the number of 404s would have gone down, but as regards the sitemap itself, while the Moz team might be right that UTF-8 encoding could be part of the problem, I think it is more to do with some non-visible formatting characters being added to your sitemap during creation. %09 is a URL-encoded tab and %0A is a URL-encoded line feed; it looks to me like these are getting into your sitemap even though they are not actually visible.
If you download your sitemap you will see that many (but not all) of the URLs look like this:
<loc>
			http://www.cmswire.com/cms/web-cms/david-hillis-10-predictions-for-web-content-management-in-2011-009588.php</loc>
Note the new lines and the indent. Some other URLs do not have this format, for example:
<loc>http://www.cmswire.com/news/topic/impresspages</loc>
It would be wise to ensure that both the file creating the sitemap and the sitemap itself are UTF-8, but the fix could be as simple as going into the file that creates the sitemap and removing those line breaks. Once that is done, wait for the next crawl and see if it brings the error numbers down (it should). As for the rest of the warnings, just be methodical: identify where they occur and why, and work through them. You will get down to few or zero warnings, and you will feel good about it!
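If you want to check the downloaded file before touching the templates, a quick scan along these lines would list the affected entries (a rough sketch under the assumption of a simple, flat sitemap with no CDATA sections; whitespace inside <loc> is technically tolerated by some parsers but clearly not by every crawler):

```python
# Rough check: list <loc> values that carry hidden leading/trailing
# whitespace in a downloaded sitemap. Assumes a simple, flat sitemap.
import re

def find_bad_locs(sitemap_xml):
    """Return the <loc> values that would produce %0A/%09 in crawled URLs."""
    locs = re.findall(r"<loc>(.*?)</loc>", sitemap_xml, flags=re.DOTALL)
    return [loc for loc in locs if loc != loc.strip()]
```

Running it over the downloaded XML should flag exactly the "many (but not all)" entries described above.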
-
Interesting: a new crawl just completed and now I only have 307 404 errors, plus a lot of other different errors and warnings. It's frustrating to see such different things each week.
barb
-
Hi Lynn,
I did download the CSV and found that all the 404 errors were generated from our sitemap.xml file. Here's what the URLs look like:
referring URL is http://www.cmswire.com/sitemap.xml
You'll notice that there is odd formatting wrapping the URL (%0A%09%09%09) plus an extra http://www.cmswire prepended to the front of the URL, which does not exist in the actual sitemap.xml file if I view it separately.
Also: Moz support looked at our campaign and they thought the problem was that our sitemap wasn't UTF-8 encoded.
Any ideas?
-
Hi Brice,
What makes you think the issue is that Moz cannot read the URLs? In the first instance I would want to make sure that something else is not going wrong by checking the URLs Moz is flagging as 404s, ensuring they actually do or do not exist, and if the latter, finding out where the link is coming from (be it the sitemap or another page on the site). You may have already done this, but if not, you can get all this information by downloading the error report as CSV and then filtering in Excel to get data for 404 pages only.
If you have done this already, give us a sample or two of the URLs Moz is flagging, along with the referring URL and your sitemap URL, and we might be able to diagnose the issue better. It would be unusual for the Moz crawler to start throwing errors all of a sudden if nothing else has changed. I'm not saying it is impossible for it to be an error on Moz's side, just that the chances are that something else is going on.
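For what it's worth, the Excel filtering step can also be scripted. A minimal sketch, assuming the export has a status-code column (the exact header names in Moz's CSV may differ, so treat them as placeholders):

```python
# Sketch: pull only the 404 rows out of an exported crawl-error CSV.
# The column name is an assumption; adjust it to match the actual export.
import csv

def rows_with_404(csv_path, status_col="HTTP Status Code"):
    """Return the rows whose status column reads 404."""
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f) if row.get(status_col) == "404"]
```

From there you can group the surviving rows by referrer to see whether they all trace back to the sitemap or to other pages.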
Hope that helps!