404 Error on Spider Emulators
-
I recently began working at a company called Uncommon Goods. I ran a few different spider emulators on our homepage (uncommongoods.com) and saw a 404 error on SEO-browser.com, as well as URL errors on Summit Media's emulator and SEOMoz's crawler. It seems there is a serious problem here. How is this affecting our site from an SEO standpoint? What are the repercussions?
Also, I know we have a lot of JavaScript on our homepage... is this causing the 404? Any advice would be much appreciated.
Thanks!
-Zack
-
Hey Zack,
It seems your website is now returning a 200, so you apparently managed to fix the problem.
Was the problem coming from the server configuration as I suggested?
Best regards,
Guillaume Voyer. -
Hi Zack,
Yes, having the home page return a 404 error is a HUGE problem. It tells the engines that the page doesn't exist, so they will stop crawling it and eventually drop it from their index, even if it still returns content.
You should solve this problem ASAP!
Best regards,
Guillaume Voyer. -
Hi Guillaume,
Your comments about JavaScript on the client side make complete sense to me now, and I will examine our Resin config with my IT team. Thanks for explaining. Also, as per Beneeb's advice above, I'm going to try making some changes to robots.txt.
From a bigger-picture perspective, though, do you think this 404 error is even that big of a deal? Are we likely to be penalized for this in terms of PageRank, Domain Authority, etc.?
Thanks for your help!
-Zack
-
Hi Zack,
The 404 error has nothing to do with the robots.txt file; it has to do with your server configuration, as I said in my answers below.
About the robots.txt file, I would remove the Disallow: line if you don't need to block anything.
Best regards,
Guillaume Voyer. -
Hi Beneeb,
That tool is awesome! It definitely helps, thanks! I'm going to show that report to my IT guys today. I think your guess is a very good one. Hopefully I can persuade them to make the changes and we'll see if it resolves the error.
Best Regards,
-Zack
-
Hi Zack,
To be honest with you, it was just a guess. I used a robots.txt syntax checker and saw several issues. You can check out that same tool here and run your current robots.txt file through it:
http://tool.motoricerca.info/robots-checker.phtml
I hope that gets you pushed in the right direction. I'm very new to SEO, but I've worked in the technical support world forever. So, my suggestion is only worth what you paid for it.
-
Hi Beneeb,
Thank you for your insight. I think this makes sense, as I see there is some redundancy in robots.txt as it is now. I'm curious, however: why do you think that changing robots.txt will resolve the 404 error?
Best Regards,
-Zack
-
Hi Zack,
Quick follow-up: your website always returns a 500 to HTTP/1.0 requests. With HTTP/1.1, the homepage returns a 404 and subpages return a 200.
I saw the website is running on a Resin server rather than an Apache server, so you might want to look into your Resin server's configuration.
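For anyone who wants to reproduce this kind of check, here is a minimal sketch of what a header-checking tool does: it sends a raw request with a given HTTP version and reads the status code off the first line of the response. The response strings below are canned examples illustrating the behavior described above, not live output from the site.

```python
def build_request(path: str, version: str, host: str) -> str:
    """Build the raw request line + headers a header checker would send."""
    return f"HEAD {path} HTTP/{version}\r\nHost: {host}\r\n\r\n"

def parse_status(raw_response: str) -> int:
    """Read the status code from the first line of a raw HTTP response."""
    status_line = raw_response.split("\r\n", 1)[0]  # e.g. "HTTP/1.1 404 Not Found"
    return int(status_line.split(" ")[1])           # second token is the code

print(build_request("/", "1.0", "www.uncommongoods.com"))
print(parse_status("HTTP/1.0 500 Internal Server Error"))  # 500
print(parse_status("HTTP/1.1 404 Not Found"))              # 404
```

Running the same check with `1.0` and `1.1` as the version is exactly how you can see the two different behaviors for yourself.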
Best regards,
Guillaume Voyer. -
Hi Zack,
Actually, when I use this HTTP header tool and input http://www.uncommongoods.com/, I see that the header returned is in fact a 500 Internal Server Error.
The HTTP header is returned by the server before the browser can even know that there is JavaScript on the page, so it has nothing to do with JavaScript.
You'll have to look at the server side: an Internal Server Error and the HTTP headers are returned by the server, unlike JavaScript, which is executed client-side.
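To make that ordering concrete: in a raw HTTP response, the status line and headers come first, separated from the body by a blank line, and any JavaScript lives in the body. A small sketch with a hypothetical canned response:

```python
# Hypothetical raw response: status line + headers, blank line, then body.
canned = (
    "HTTP/1.1 500 Internal Server Error\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><head><script>/* client-side JS */</script></head></html>"
)

head, body = canned.split("\r\n\r\n", 1)        # headers end at the blank line
status = int(head.split("\r\n")[0].split(" ")[1])

print(status)                # 500, determined from the headers alone
print("<script>" in head)    # False: no JS in the headers
print("<script>" in body)    # True: JS only ever appears in the body
```

The status code is fixed before a crawler (or browser) has seen a single `<script>` tag, which is why the JavaScript can't be the cause.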
Best regards,
Guillaume Voyer. -
Hi Zack,
Looking at your robots.txt file, you have several errors. I would replace your current robots.txt file with the following:
User-Agent: *
Disallow:
Sitemap: http://www.uncommongoods.com/sitemap.xml
(not sure why the message truncated your sitemap file, but you get the picture)
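You can sanity-check the corrected file with Python's standard-library robots.txt parser before deploying it. A quick sketch (an empty `Disallow:` blocks nothing, so every URL stays crawlable; `site_maps()` needs Python 3.8+):

```python
from urllib.robotparser import RobotFileParser

# The corrected robots.txt from above, fed to the parser line by line.
rules = [
    "User-Agent: *",
    "Disallow:",
    "Sitemap: http://www.uncommongoods.com/sitemap.xml",
]

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "http://www.uncommongoods.com/"))  # True: nothing blocked
print(rp.site_maps())  # ['http://www.uncommongoods.com/sitemap.xml']
```

If `can_fetch` came back False for pages you want indexed, you'd know the file still had a blocking rule in it.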