404 Error on Spider Emulators
-
I recently began working at a company called Uncommon Goods. I ran a few different spider emulators on our homepage (uncommongoods.com) and I saw a 404 Error on SEO-browser.com as well as URL errors on Summit Media's emulator and SEOMoz's crawler. It seems there is a serious problem here. How is this affecting our site from an SEO standpoint? What are the repercussions?
Also, I know we have a lot of JavaScript on our homepage. Is this causing the 404? Any advice would be much appreciated.
Thanks!
-Zack
-
Hey Zack,
It seems your website is now returning a 200, so you apparently managed to fix the problem.
Was the problem coming from the server configuration as I suggested?
Best regards,
Guillaume Voyer. -
Hi Zack,
Yes, having the home page return a 404 error is a HUGE problem. It tells the engines that the page doesn't exist, so they will stop crawling it and eventually drop it from their index, even if it actually returns content.
You should solve this problem ASAP!
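To make the stakes concrete, here is a minimal Python sketch of how a crawler reads the status line your header tool reports (the helper names are mine, not from any engine's documentation):

```python
# Sketch: interpret an HTTP status line the way a search-engine crawler would.
# parse_status() and crawler_view() are hypothetical helpers for illustration.

def parse_status(status_line: str) -> int:
    """Extract the status code from a raw status line,
    e.g. 'HTTP/1.1 404 Not Found' -> 404."""
    version, code, *reason = status_line.split()
    return int(code)

def crawler_view(code: int) -> str:
    """Roughly how a crawler treats each response class."""
    if 200 <= code < 300:
        return "indexable"
    if code == 404:
        return "treated as gone; will be dropped from the index"
    if code >= 500:
        return "server error; crawl retried, then deprioritized"
    return "other"

print(crawler_view(parse_status("HTTP/1.1 404 Not Found")))
# -> treated as gone; will be dropped from the index
```

The point: the engines act on the status code alone, no matter what content the page body actually serves.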
Best regards,
Guillaume Voyer. -
Hi Guillaume,
Your comments about JavaScript on the client side make complete sense to me now, and I will examine our Resin config with my IT team. Thanks for explaining. Also, as per Beneeb's advice above, I'm going to try making some changes to robots.txt.
From a bigger-picture perspective though, do you think this 404 error is even that big of a deal? Are we likely to be penalized for it in terms of PageRank, Domain Authority, etc.?
Thanks for your help!
-Zack
-
Hi Zack,
The 404 error has nothing to do with the robots.txt file; it comes from your server configuration, as I said in my answers below.
About the robots.txt file, I would remove the Disallow: line if you don't need to block anything.
Best regards,
Guillaume Voyer. -
Hi Beneeb,
That tool is awesome! It definitely helps, thanks! I'm going to show that report to my IT guys today. I think your guess is a very good one. Hopefully I can persuade them to make the changes and we'll see if it resolves the error.
Best Regards,
-Zack
-
Hi Zack,
To be honest with you, it was just a guess. I used a robots.txt syntax checker and saw several issues. You can check out that same tool here & run your current robots.txt file through it:
http://tool.motoricerca.info/robots-checker.phtml
I hope that gets you pushed in the right direction. I'm very new to SEO, but I've worked in the technical support world forever. So, my suggestion is only worth what you paid for it.
-
Hi Beneeb,
Thank you for your insight. I think this makes sense as I see there is some redundancy in robots.txt as it is now. I'm curious however, why do you think that changing robots.txt will resolve the 404 error?
Best Regards,
-Zack
-
Hi Zack,
Quick followup: your website always returns a 500 to HTTP/1.0 requests. With HTTP/1.1, the homepage returns a 404 and subpages return a 200.
I saw that the website is running on a Resin server rather than an Apache server, so you might want to look into your Resin server's configuration.
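For reference, here is a rough Python sketch of the two raw requests such a test sends (to actually run it against your server you would open a socket and write these bytes, which I've left out):

```python
def build_request(host: str, path: str = "/", version: str = "1.1") -> bytes:
    """Build a minimal raw HTTP GET request.
    HTTP/1.1 requires a Host header; HTTP/1.0 does not."""
    lines = [f"GET {path} HTTP/{version}"]
    if version == "1.1":
        lines.append(f"Host: {host}")
        lines.append("Connection: close")
    lines.append("")  # blank line terminates the header block
    lines.append("")
    return "\r\n".join(lines).encode("ascii")

# The two probes: same URL, different protocol version.
print(build_request("www.uncommongoods.com", "/", "1.0"))
print(build_request("www.uncommongoods.com", "/", "1.1"))
```

If the server answers those two probes with different status codes (500 vs. 404 here), the problem is in how the server handles each protocol version, which points squarely at its configuration.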
Best regards,
Guillaume Voyer. -
Hi Zack,
Actually, when I use this HTTP header tool and input http://www.uncommongoods.com/, I see that the header returned is in fact a 500 Internal Server Error.
The HTTP header is returned by the server before the browser can even know there is JavaScript on the page, so the error has nothing to do with JavaScript.
You'll have to look at the server side: the Internal Server Error and the HTTP headers come from the server, as opposed to JavaScript, which is executed client-side.
Best regards,
Guillaume Voyer. -
Hi Zack,
Looking at your robots.txt file, you have several errors. I would replace your current robots.txt file with the following:
User-Agent: *
Disallow:
Sitemap: http://www.uncommongoods.com/sitemap.xml
(not sure why the message truncated your sitemap file, but you get the picture)
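If you'd like, you can sanity-check a robots.txt file offline with Python's standard-library parser before deploying it (a quick sketch; the URLs are just examples):

```python
from urllib.robotparser import RobotFileParser

# Sketch: confirm that a robots.txt with an empty Disallow blocks nothing.
# parse() takes the file's lines directly, so no network call is needed.
rules = [
    "User-agent: *",
    "Disallow:",
    "Sitemap: http://www.uncommongoods.com/sitemap.xml",
]
rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "http://www.uncommongoods.com/"))
print(rp.can_fetch("Googlebot", "http://www.uncommongoods.com/any/page"))
```

An empty `Disallow:` value means "allow everything", so both checks should come back permissive; if either returns False, the file has a stray rule.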