2.3 million 404s in GWT - learn to live with 'em?
-
So I’m working on optimizing a directory site. Total size: 12.5 million pages in the XML sitemap. This is orders of magnitude larger than any site I’ve ever worked on – heck, every other site I’ve ever worked on combined would be a rounding error compared to this.
Before I was hired, the company brought in an outside consultant to iron out some of the technical issues on the site. To his credit, he was worth the money: indexation and organic Google traffic have steadily increased over the last six months. However, some issues remain. The company has access to a quality (i.e. paid) source of data for the directory listing pages, but the last time the data was refreshed some months back, it threw 1.8 million 404s in GWT. That number has kept climbing since; we're now at 2.3 million 404s in GWT.
Based on what I’ve been able to determine, links generated from the data feed on this site generally break for one of two reasons: the page just doesn’t exist anymore (i.e. it wasn’t found in the data refresh, so the page was simply deleted), or the URL had to change due to some technical issue (the page still exists, just now under a different link). With other sites I’ve worked on, 404s aren’t that big a deal: set up a 301 redirect in htaccess and the problem is solved. In this instance, setting up that many 301 redirects, even if it could somehow be automated, just isn’t an option due to the potential bloat in the htaccess file.
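One caveat to "it can't be automated": if the data refresh can emit old-URL/new-URL pairs for the pages that merely changed address, those don't have to live in htaccess at all. As a rough sketch (the CSV column names here are made up; a real feed would differ), a small script could turn those pairs into an Apache RewriteMap text file:

```python
import csv

def build_rewrite_map(csv_path, map_path):
    """Turn a CSV of old/new URL pairs into an Apache RewriteMap text file.

    Assumes (hypothetically) the refresh produces a CSV with
    'old_path' and 'new_path' columns; adjust to the real feed format.
    """
    count = 0
    with open(csv_path, newline="") as src, open(map_path, "w") as dst:
        for row in csv.DictReader(src):
            old, new = row["old_path"].strip(), row["new_path"].strip()
            if old and new:
                # RewriteMap txt format: one "key value" pair per line
                dst.write(f"{old} {new}\n")
                count += 1
    return count
```

The catch is that RewriteMap only works in the server/vhost config, not in .htaccess, so this assumes that level of access. There it would look something like `RewriteMap listingmap txt:/path/to/map.txt` plus a rule that rewrites to `${listingmap:%{REQUEST_URI}}` when the lookup is non-empty; for millions of entries, Apache's dbm map type is a better fit than a flat text file.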
Based on what I’ve read here and here, 404s in and of themselves don’t really hurt a site’s indexation or ranking. And the more I consider it, the clearer it is that the really big sites – the Amazons and eBays of the world – have to contend with broken links all the time as product pages come and go. Bottom line: if we really want to refresh the data on the site on a regular basis – and I believe that is priority one if we want the bot to come back more frequently – we’ll just have to put up with broken links on the site on a more regular basis.
So here’s where my thought process is leading:
- Go ahead and refresh the data. Make sure the XML sitemaps are refreshed as well – hopefully this will help the site stay current in the index.
- Keep an eye on broken links in GWT. Implement 301s for really important pages (i.e. content-rich stuff that is really mission-critical). Otherwise, just learn to live with a certain number of 404s being reported in GWT on more or less an ongoing basis.
- Watch the overall trend of 404s in GWT. At least make sure they don’t increase. Hopefully, if we can make sure that the sitemap is updated when we refresh the data, the 404s reported will decrease over time.
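To make the "301s for really important pages" step concrete, here's a rough triage sketch. The record fields are hypothetical (a real GWT crawl-errors export has its own columns), but the idea is to sort the 404 list so the URLs with the most inbound links rise to the top of the manual-redirect queue:

```python
def triage_404s(rows, threshold=10):
    """Split 404 records into 'redirect now' vs 'live with it'.

    `rows` is a list of dicts with hypothetical keys:
      'url'           - the broken URL
      'inbound_links' - how many pages link to it
    Anything at or above `threshold` inbound links is worth a manual 301;
    the rest just get monitored.
    """
    ranked = sorted(rows, key=lambda r: r["inbound_links"], reverse=True)
    fix = [r["url"] for r in ranked if r["inbound_links"] >= threshold]
    ignore = [r["url"] for r in ranked if r["inbound_links"] < threshold]
    return fix, ignore
```

The threshold and the signal (inbound links, traffic, whatever the data supports) are judgment calls; the point is just to stop eyeballing 2.3 million rows.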
We do have an issue with the site creating some weird pages with content that lives within tabs on specific pages. Once we can clamp down on those and a few other technical issues, I think keeping the data refreshed should help with our indexation and crawl rates.
Thoughts? If you think I’m off base, please set me straight.
-
I was actually thinking about some type of wildcard rule in htaccess. This might actually do the trick! Thanks for the response!
-
Hi,
Sounds like you’ve taken on a massive job with 12.5 million pages, but I think you can implement a simple fix to get things started.
You’re right to think about that sitemap: make sure it’s being dynamically updated as the data refreshes; otherwise it will be responsible for a lot of your 404s.
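As a sketch of that idea (the names here are made up; the point is that the sitemap gets rebuilt from the same refreshed data that builds the pages, so it can never reference a deleted listing):

```python
import xml.etree.ElementTree as ET

def build_sitemap(current_paths, base_url="https://www.example.com"):
    """Rebuild sitemap XML from only the paths that exist after a refresh."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for path in sorted(current_paths):
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = base_url + path
    return ET.tostring(urlset, encoding="unicode")
```

Keep in mind the sitemap protocol caps each file at 50,000 URLs, so at 12.5 million pages this really means regenerating a sitemap index that points at a few hundred child sitemaps.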
I understand you don’t want to add 2.3 million separate redirects to your htaccess, so what about a simple rule: if the request starts with ^/listing/ (one of your directory page paths), is not a file, and is not a directory, then redirect back to the homepage. Something like this:

RewriteEngine On
# Does the request start with /listing/ (or whatever structure you are using)?
RewriteCond %{REQUEST_URI} ^/listing/ [NC]
# Is it NOT an existing file and NOT an existing directory?
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# All conditions true? Redirect to the homepage.
RewriteRule .* / [L,R=301]

This way you can target the specific URL structure of the pages which tend to turn into 404s. Any 404s outside of this rule will still serve a 404 code and show your 404 page, so you can fix those manually, but the pages which tend to disappear can all be redirected back to the homepage if they’re not found.
You could still implement your 301s for important pages, or simply recreate a page if it’s worth doing so, but you will have dealt with a large chunk of your non-existent pages.
I think it’s a big job and those missing pages are only part of it, but this should help you to sift through all of the data to get to the important bits – you can mark a lot of URLs as fixed and start giving your attention to the important pages which need some work.
Hope that helps,
Tom