Thousands of 404s
-
Hi there,
I'm working on a site that has a ridiculous number of 404s being reported by Webmaster Tools. We believe this was caused by an on-page error that kept appending extra folders to URLs in a spiral, i.e. /salons/uk/teeth became something like /salons/uk/teeth/salons/edinburgh/hair/teeth...
Anyway, we think the issue is now sorted, but it seems these pages were indexed, so it looks like Google is still requesting them when it crawls the site. What's my best move? It's the sheer volume (over 13,000) that has me concerned, so I thought it best to seek some expert advice before continuing.
Thanks in advance!
-
As it's all sorted now, I really wouldn't worry about them too much. You can use the Remove URLs functionality in WMT, but this is a manual process, so I wouldn't do it at this scale. If I were in your position, I'd probably just let the pages keep 404ing. After a while, Google will usually stop trying to recrawl the 404 pages; right now it is probably retrying in case the 404s were an accident.
If the recrawling is causing a bandwidth problem, you can block the bad URL pattern with robots.txt.
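If you do go the robots.txt route, a minimal sketch might look like the following. It assumes (based on the example URL in the question) that every runaway URL repeats the /salons/ folder; adjust the pattern to whatever segment actually repeats on your site:

```
User-agent: *
Disallow: /salons/*/salons/
```

Googlebot honours the `*` wildcard here, so this blocks only the looping URLs while leaving the legitimate /salons/... pages crawlable. Note that the wildcard is not part of the original robots.txt standard, so some other crawlers may ignore it.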
-
Hi Philip!
If these URLs are already indexed, you should 301 redirect them to the right URL (especially if any of them happen to have inbound links). You could also try Google's URL removal tool (see https://support.google.com/webmasters/answer/1663416) if all you want is to get rid of them.
Good luck, hope this helps.
//Anders
-
Hi Philip,
If all the bad URLs follow the same pattern, I would try adding that pattern to robots.txt so you prevent Google from crawling the pages. Even better would be adding a noindex tag to the pages (note that Google has to be able to crawl a page to see its noindex tag, so don't combine the two approaches on the same URLs).
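A sketch of the noindex option, added to the template that renders these runaway pages:

```html
<!-- On each runaway page. Google must be able to crawl the page to see
     this tag, so don't also block the same URLs in robots.txt. -->
<meta name="robots" content="noindex">
```

The same signal can be sent for non-HTML responses with an `X-Robots-Tag: noindex` HTTP response header.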
Hope this helps!
Related Questions
-
Sitemap.xml strategy for site with thousands of pages
I have a client that has a HUGE website with thousands of product pages. We don't currently have a sitemap.xml because generating one would take so much processing power. I have thought about creating a sitemap for just the key pages on the website, but didn't want to hurt the SEO of the thousands of product pages. If you have a sitemap.xml that only includes some of the pages on your site, will it negatively impact the other pages that Google has indexed but that are not listed in the sitemap.xml?
Technical SEO | jerrico10 -
Huge number of crawl anomalies and 404s - non- existent urls
Hi there, Our site was redesigned at the end of January 2020. Since the new site was launched we have seen a big drop in impressions (50-60%) and also a big drop in total and organic traffic (again 50-60%) compared to the old site. I know in the current climate some businesses will see a drop in traffic; however, we are a tech business and some of our core search terms have increased in search volume as a result of remote working. According to Search Console there are 82k URLs excluded from coverage. The majority of these are classed as 'crawl anomaly', and there are 250+ 404s; almost all of the URLs are non-existent: they have our root domain with a string of random characters on the end. Here are a couple of examples: root.domain.com/96jumblestorebb42a1c2320800306682 root.domain.com/01sportsplazac9a3c52miz-63jth601 root.domain.com/39autoparts-agency26be7ff420582220 root.domain.com/05open-kitchenaf69a7a29510363 Is this a cause for concern? I'm thinking that all of these random fake URLs could be preventing genuine pages from being indexed, or they could be having an impact on our search visibility. Can somebody advise please? Thanks!
Technical SEO | nicola-10 -
False Soft 404s, Shadow Bans, and Old User Generated Content
What are the best ways to keep old user-generated content (UGC) pages from being falsely flagged by Google as soft 404s? I have tried HTML sitemaps to make sure no page is orphaned, but that has not solved the problem. Could "crawled, currently not indexed" be explained by a shadow ban from Google? I have had problems with Google removing pages from SERPs without telling me about it. It looks like a lot of content is not ranking due to its age. How can one go about refreshing UGC without changing the work of the user?
Technical SEO | STDCarriers0 -
1,300,000 404s
Just moved a WordPress site over to a new host and reskinned it. Found out after the fact that the site had been hacked; the db is clean. I did notice at first there were a lot of 404s being generated, so I set up a script to capture them and return a 410 Gone, and the plan was to submit them to have them removed from the index, thinking there was a manageable number. But when I looked at Google Webmaster Tools there were over 1,300,000 404 errors. My puny attempt to solve this problem seems to need more of an industrial-size solution. My question is: what would be the best way to deal with this? Not all of the pages are indexed in Google: only 637 are indexed, but you can only see about 150 in the index. Bing is another story, saying over 2,700 pages are indexed but only about 200 are visible. How is this affecting any future rankings? They do not rank well, as I found out, because of very slow page load speed and of course the hacks. The link profile in Google is OK, and there are no messages in Google Webmaster Tools.
Technical SEO | Runner20090 -
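The "capture and return a 410" step described above can also be handled at the web-server level rather than in a script. A sketch for Apache (mod_alias), where the regex is a pure placeholder; the real pattern would have to be derived from the actual hacked URLs:

```apache
# Answer matching requests with "410 Gone" so Google drops them faster
# than a 404 would. The pattern below is a placeholder -- do not use as-is.
RedirectMatch gone ^/[0-9]{2}[a-z0-9-]+$
```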
Google WMT continues reporting fixed 404s - why?
I work with a news site that had a heavy restructuring last spring. This involved removing many pages that were duplicates, tags, etc. Since then, we have taken very careful steps to remove all links coming into these deleted pages, but for some reason WMT continues to report them. By last August we had cleared over 10k 404s on our site, but this lasted only about 2 months and they started coming back. The "linked from" report gives no data, and other crawlers like SEOmoz aren't detecting any of these errors. The pages aren't in the sitemap and I've confirmed that they're not really being linked from anywhere. Why do these pages keep coming back? Should I even bother removing them over and over again? Thanks -Juanita
Technical SEO | VoxxiVoxxi0 -
Disallow: /search/ in robots but soft 404s are still showing in GWT and Google search?
Hi guys, I've already added the following syntax to robots.txt to prevent search engines from crawling the dynamic pages produced by my website's search feature: Disallow: /search/. But soft 404s are still showing in Google Webmaster Tools. Do I need to wait (it's been almost a week since I added the syntax to my robots.txt)? Thanks, JC
Technical SEO | esiow20130 -
Google indexing thousands of crazy search results with %25253
In GWT I started seeing very strange pages indexed a few weeks ago, and Google is now reporting over 21,000 pages (blocked by robots.txt) with weird URLs like this: http://www.francesphotography.com/?s=no-results:no-results%25252525252525253Ano-results%2525252525252525253Ano-results%252525252525252525253Ano-results%252525252525252525253Ano-results%252525252525252525253Ano-results%252525252525252525253Ano-results%25252525252525252525253Ano-results%25252525252525252525253Ano-results%2525252525252525252525253Adanna&cat=no-results http://www.francesphotography.com/?s=no-results:no-results%2525253Ano-results%25252525253Ano-results%25252525253Ano-results%25252525253Ano-results%2525252525253Ano-results%25252525252525253Ano-results%25252525252525253Ano-results%25252525252525253Adanna&cat=no-results The current robots.txt looks like this:
User-agent: *
Disallow: /wp-content
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /data
Disallow: /slideshows
Disallow: /page/*/?s=
Disallow: /?s=
Disallow: /search
This website is running an up-to-date WP install with Yoast's Google Analytics and SEO plug-in. I can't point to anything specific that happened with the site when these URLs started appearing, even after I modified the robots.txt. What can be done to try and stop Google from creating and indexing these goofy URLs? I see lots of sites having this issue when I search in Google, but no one seems to have a solution.
Technical SEO | BoulderJoe -
Search for 404s on Sandbox
Can I verify an IP in Google Webmaster Tools to search for any 404s? Or maybe I could do it with SEOmoz tools? Thanks!
Technical SEO | tylerfraser0