202 error page blocked in robots.txt versus using a crawlable 404 error page
-
Our error page currently returns a 202 status and is unreachable by search engines because it is blocked in our robots.txt file. Should the error page instead return a 404 and be reachable by search engines?
Is there more value, or is it better practice, to use a 404 over a 202?
We noticed in our Google Webmaster account that we have a number of broken links pointing to the site, but the 404 error page was not accessible.
If you have any insight that would be great, if you have any questions please let me know.
Thanks,
VPSEO
-
A 202 (Accepted) is a success status, not an error, so returning it for a missing page tells search engines the URL exists and resolves fine, which miscategorizes the page. A 404 correctly says the page doesn't exist, which is better. However, when a similar and relevant page exists, a 301 redirect to it is the best option, since it passes along any link value. You should also remove the error page from robots.txt: if crawlers are blocked from it, they can't see the status code at all.
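As a rough sketch of that setup (assuming an Apache server; all paths here are hypothetical examples, not from the original question), the 301 redirect and the custom 404 page could be configured in .htaccess like this:

```apacheconf
# Permanently redirect a removed URL to its closest relevant replacement
Redirect 301 /old-widget.html /products/new-widget.html

# Serve a custom error page for anything that genuinely no longer exists;
# Apache still sends the 404 status code along with this page
ErrorDocument 404 /error-404.html
```

With this in place, crawlers see a 301 for moved content and a real 404 (not a 202) for everything else.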
-
I think you should return a 404 status if content no longer exists. The internet is always changing, and 404 pages are a normal part of that.
A custom 404 page with helpful navigation is also useful to users.
If a URL that 404s has backlinks pointing to it, or someone has linked to it incorrectly, use a 301 redirect to send visitors to the correct page or a closely related one so that link value isn't wasted.
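The decision both answers describe (200 if the page exists, 301 if it moved to a known replacement, 404 if it is simply gone) can be sketched as a tiny lookup. This is an illustrative sketch, not a real server; all paths and page sets below are hypothetical:

```python
def status_for(path, live_pages, redirects):
    """Pick the HTTP status an incoming URL should get:
    200 if the page exists, 301 plus a target if it moved
    to a known replacement, and 404 if it is simply gone."""
    if path in live_pages:
        return 200, path
    if path in redirects:
        return 301, redirects[path]
    return 404, None

# Hypothetical site state
live = {"/", "/products"}
moved = {"/old-widget": "/products"}

print(status_for("/old-widget", live, moved))    # moved: 301 to /products
print(status_for("/no-such-page", live, moved))  # gone: plain 404
```

The point of the 404 branch is that it must actually send the 404 status code, not a 2xx with an error message on the page.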
Hope this helps.