Do I need to redirect soft 404s that I got from Google Webmaster Tools?
-
Hi guys,
I got 1,000+ soft 404s from GWT. All of the soft 404s return a 200 HTTP status code, but the URLs look something like the following:
http://www.example.com/search/house-for-rent
(query used: house for rent)
http://www.example.com/search/-----------rent
(query used: -------rent)
There are no listings that match these queries, and an advanced search form is visible on these pages.
Here are my questions:
1. Do I need to redirect each page to its appropriate landing page?
2. Do I need to add a user sitemap or a list of URLs where users can search for other properties?
Any suggestions would help.
-
Thanks, guys, for your input. By the way, this issue was already resolved last year. Thanks again!
-
It depends on what you want to achieve. If the 404s are pages which no longer exist, then the fastest approach is to use the GWMT removal tool to remove the page pattern and also add a noindex to those pages, in addition to obviously returning a 404.
A soft 404 is a case where content is not found but an HTTP status of 200 is returned - this needs to change if you currently serve non-existing pages.
We generally do the following:
- Content which we know does not exist anymore (e.g. a deleted product page or a deleted product category) is served with a SC_GONE (410), and we provide cross-selling information (e.g. we display products from related categories). This works great, and we have seen a boost in indexed content.
- URLs which never existed go through a standard 404 - this is intentional, as our monitoring will pick it up. If it is a legitimate 404 but of SEO value, we will do a redirect if it makes sense, or just let Google drop it over time (this sometimes takes up to 4 weeks).
You can have multiple versions of 404 pages, but this would need to be coded out - i.e. in your application server you would define a 404 page which then programmatically displays content depending on what you want to do. A minimal sketch of this pattern follows below.
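For anyone wanting to see the shape of this, here is a minimal sketch on a Java servlet stack (SC_GONE above is the javax.servlet HttpServletResponse constant for 410). The ProductCatalog helper and JSP paths are hypothetical placeholders, not the poster's actual code:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ProductServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // ProductCatalog is a hypothetical lookup helper, assumed to exist.
        String sku = req.getPathInfo(); // e.g. "/blue-widget"

        if (ProductCatalog.exists(sku)) {
            // Live product: normal 200 response.
            req.getRequestDispatcher("/WEB-INF/views/product.jsp").forward(req, resp);
        } else if (ProductCatalog.wasDeleted(sku)) {
            // Known-deleted product: 410 tells crawlers it is gone for good,
            // while the view still shows cross-selling suggestions to users.
            resp.setStatus(HttpServletResponse.SC_GONE);
            req.getRequestDispatcher("/WEB-INF/views/product-gone.jsp").forward(req, resp);
        } else {
            // URL never existed: a plain 404 so monitoring picks it up.
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
        }
    }
}
```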
-
I know I am way late to the party, but MagicDude4Eva, have you had success just putting a noindex header on the soft 404 pages?
That sounds like the easiest way to deal with this problem, if it works, especially since a lot of sites use dynamic URLs for product search that you don't want to de-index.
Can you have multiple 404 pages? Otherwise, redirecting an empty search results page to your 404 page could be quite confusing.
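(For reference, the noindex header being asked about here is the X-Robots-Tag HTTP response header, which Google documents. A minimal sketch in the same servlet style - the helper class name is invented for the example:)

```java
import javax.servlet.http.HttpServletResponse;

// Hypothetical helper: mark a thin or empty results page as noindex via
// the X-Robots-Tag response header while still returning a normal 200.
public final class NoindexHelper {
    private NoindexHelper() {}

    public static void markNoindex(HttpServletResponse resp) {
        resp.setHeader("X-Robots-Tag", "noindex, follow");
    }
}
```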
-
Hi mate,
I already added the following syntax to my website's robots.txt:
User-agent: *
Disallow: /search/
I have checked the dynamic pages and URLs produced by the search box (e.g. http://www.domain.com/search/jhjehfjehfefe), but they are still showing in Google, and there are still 1,000+ soft 404s in my Google Webmaster Tools account.
I appreciate your help.
Thanks man!
-
I think if it is done carefully it adds quite a lot of value. A proper site taxonomy is obviously always better and more predictable.
-
I would never index or let Google crawl search pages - very dangerous ground.
-
I would do the following:
- For valid searches, create a proper canonical URL (and then decide if you want index,follow or noindex,follow on the result pages). You might not necessarily want to index search results, but rather a structure of items/pages on your site.
- I would generally not index search results (rather have your pages crawled through category structures, sitemaps and RSS feeds).
- It does sound, though, like the way you implemented the search is wrong - it should not result in a soft 404. It could be as easy as making the canonical for your search just "/search" (without any search terms) and, if no results are found, displaying options to the user for search refinements (see the sketch after this list).
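As a rough illustration of that last point, here is a sketch of a search servlet that keeps the canonical at /search and serves refinement options instead of a thin no-results page. The lookup and view names are invented for the example; the HTTP Link header form of rel="canonical" is honoured by Google:

```java
import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SearchServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String query = req.getParameter("q");
        List<String> results = findListings(query); // hypothetical lookup

        // Point crawlers at the bare /search URL regardless of the query term.
        resp.setHeader("Link", "<https://www.example.com/search>; rel=\"canonical\"");

        if (results.isEmpty()) {
            // No matches: still a deliberate 200 page, offering refinement
            // options rather than thin content that triggers a soft 404.
            req.getRequestDispatcher("/WEB-INF/views/search-refine.jsp").forward(req, resp);
        } else {
            req.setAttribute("results", results);
            req.getRequestDispatcher("/WEB-INF/views/search-results.jsp").forward(req, resp);
        }
    }

    private List<String> findListings(String query) {
        return List.of(); // placeholder for the real listing search
    }
}
```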
The only time I have seen soft 404s with us is in cases where we removed product pages and then displayed a generic "product not available" page with some upselling options. In this case we set a status of 410 (GONE), which resolved the soft 404 issue.
The advantage of the 410 is that your application makes the decision that a page is gone, whereas a 404 could really just be a wrongly linked URL.
-
Yes, customize your 404 page. Whenever your database doesn't have search results for a user's query, you can redirect them to that page.
Have you considered blocking the "search" results directory in robots.txt? Because those pages are dynamic and not actually physical pages, it's better to block them.
-
What do you mean by a default page? Is it a customized 404 page?
Thanks a lot, man! I appreciate it.
-
Hi,
As per your URL, I think the best solution is to block the "search" directory in robots.txt; then Google will not be able to access those pages, so there will be no errors in GWT. Or you can create a default page for queries which don't have any results in the database.