Huge spike in 404s and 500 errors
-
I'm curious what might cause an inordinate number of 404s in the reporting from SEOmoz's dashboard.
I'm exploring links that are marked as 404s and they are (for the most part) working. I talked with the sysadmin and there were no outages this weekend. We also had a number of 500 errors reported in Webmaster Tools but everything seems to be up.
Any ideas?
-
Maybe submit a support ticket to SEOmoz to see if the 404s might have been false positives.
-
The interesting thing is that SEOmoz threw a bunch of 404s that, for the most part, are not actually 404s, whereas Webmaster Tools is showing discontinued products, which makes sense. We didn't see any outages this weekend, so I'm a bit confused.
-
If SEOmoz and Google Webmaster Tools are both reporting errors, I would guess there actually was a problem with your site.
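One practical way to settle a disagreement between a reporting tool and the live site is to re-fetch the flagged URLs yourself and compare. A minimal sketch using only the standard library; the URL list and user agent are placeholders, and `fetch` is injectable so the logic can be checked offline:

```python
import urllib.request
import urllib.error

def status_of(url, user_agent="Mozilla/5.0 (compatible; linkcheck)"):
    """Return the HTTP status code for a URL, following redirects."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 404, 500, etc.

def false_positives(reported_404s, fetch=status_of):
    """Return URLs a tool flagged as 404 that actually answer 200 now."""
    return [url for url in reported_404s if fetch(url) == 200]
```

If most flagged URLs come back 200, the tool likely crawled during a transient outage or was rate-limited; if the list is empty, the 404s were probably real at crawl time.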
Related Questions
-
Old URLs that have 301s to 404s not being de-indexed.
We have a scenario on a domain that recently moved to enforcing SSL. If a page is requested over non-SSL (http), the server automatically redirects to the SSL (https) URL using a good old-fashioned 301. This is great, except for any page that no longer exists, in which case you get a 301 going to a 404. Here's what I mean:
Case 1 - Good page: http://domain.com/goodpage -> 301 -> https://domain.com/goodpage -> 200
Case 2 - Bad page that no longer exists: http://domain.com/badpage -> 301 -> https://domain.com/badpage -> 404
Google is correctly re-indexing all the "good" pages and displaying search results going directly to the https version. But Google is stubbornly hanging on to all the "bad" pages and serving up the original URL (http://domain.com/badpage) unless we submit a removal request - and there are hundreds of these pages, so this is starting to suck. Note: the load balancer does the SSL enforcement, not the CMS, so we can't detect a 404 and serve it up first; the CMS does the 404'ing. Any ideas on the best way to approach this problem? Or any idea why Google is holding on to all the old "bad" pages that no longer exist, given that we've clearly indicated with 301s that no one is home at the old address?
Intermediate & Advanced SEO | boxclever
-
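To work through a backlog like this in bulk, it helps to enumerate which http URLs end their redirect chain in a 404 - those are the candidates for a removal request or a 410. A rough sketch; `fetch` here is a stand-in for an HTTP HEAD request returning (status code, Location header):

```python
def final_status(url, fetch, max_hops=5):
    """Follow 3xx hops and return (final_url, final_status).
    `fetch(url)` must return a (status_code, location) tuple."""
    for _ in range(max_hops):
        status, location = fetch(url)
        if status in (301, 302, 307, 308) and location:
            url = location
            continue
        return url, status
    return url, None  # redirect loop or chain too long

def dead_after_redirect(urls, fetch):
    """URLs whose redirect chain bottoms out in a 404."""
    return [u for u in urls if final_status(u, fetch)[1] == 404]
```

Feeding the resulting list into bulk removal requests (or a script against the removals API) beats submitting hundreds by hand.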
Would changing permalink structure of 7,500 articles be good or bad?
Morning everyone, I'm the tech at a large men's lifestyle publisher and we're currently running the old /year/month/ URL structure in WordPress. Now I've read countless articles about the pros and cons of date-based vs. post-name formats (/2016/06/sample-post/ vs /sample-post/), and since we produce both evergreen and daily news content we're stuck on making a decision. Currently we receive about 10,000 organic referrals per day (it has been stuck at this level for 12 months), but considering we have 7,500 articles, 10 full-time staff, and have been around for close to 7 years, we think we're underperforming. Provided we 301 redirect every old article to the new structure, is there any other reason not to make this change? Any advice would be appreciated.
Intermediate & Advanced SEO | lucwiesman
-
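The one-per-article 301 map for a date-based-to-postname change is mechanical, which makes it easy to generate and verify up front. A sketch of the mapping; the URL shapes follow the /2016/06/sample-post/ example in the question, and the actual redirects would live in WordPress or the server config rather than Python:

```python
import re

# /YYYY/MM/ prefix used by date-based WordPress permalinks
DATE_PREFIX = re.compile(r"^/\d{4}/\d{2}/")

def new_permalink(old_path):
    """Map an old date-based path to the post-name structure."""
    return DATE_PREFIX.sub("/", old_path)

def redirect_map(old_paths):
    """One 301 per article: old path -> new path."""
    return {p: new_permalink(p) for p in old_paths if DATE_PREFIX.match(p)}
```

Running the 7,500 known paths through a map like this before flipping the permalink setting lets you spot slug collisions (two posts with the same name in different months) ahead of time.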
HUGELY different ranking for a keyword between Bing and Google. Looking for ideas.
We rank really well for a brand in Bing (#2 behind manufacturer, and it's a competitive name) but are in about 15th place in Google. Any suggestions on what could be hurting us in Google are welcome!
Intermediate & Advanced SEO | absoauto
-
Hacked Wordpress Site! So many 404s
So I had a site that I worked on get hacked. We eliminated the URLs, found the vulnerability (Bluehost!) and rolled back the site. BUT the hack got a LOT of pages indexed, which are now showing in Google Search Console. These pages are now 404 errors, and I used robots.txt to try to noindex them. The problem is that Google is placing a "this site may be hacked" notice on the search listing. I asked Google to reevaluate it and it was approved, but there are still 80,000 404 errors being shown, and Google still believes the uploaded files we deleted should be showing. Doing a site: search STILL shows the infected pages, and it has been a month. Any insight would definitely be helpful. Thanks!
Intermediate & Advanced SEO | mattdinbrooklyn
-
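One thing worth checking in this situation: robots.txt cannot noindex a URL. If the dead pages are disallowed in robots.txt, Googlebot can never recrawl them to see the 404, so they linger in the index indefinitely. The hacked URLs need to stay crawlable and keep returning 404 (or 410). A quick standard-library sketch for auditing that; the sample rules below are hypothetical:

```python
from urllib import robotparser

def googlebot_can_fetch(robots_txt, path):
    """True if Googlebot may crawl `path` under the given robots.txt rules.
    Dead pages must stay crawlable so Google can see the 404/410."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("Googlebot", path)
```

Running every hacked URL through a check like this, and removing any Disallow rules that cover them, usually speeds up the de-indexing considerably.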
How does Google index pagination variables in Ajax snapshots? We're seeing random huge variables.
We're using the Google snapshot method to index dynamic Ajax content. Some of this content is from tables using pagination. The pagination is tracked with a var in the hash, something like: #!home/?view_3_page=1 We're seeing all sorts of calls from Google now with huge numbers for these URL variables that we are not generating with our snapshots. Like this: #!home/?view_3_page=10099089 These aren't trivial since each snapshot represents a server load, so we'd like these vars to only represent what's returned by the snapshots. Is Google generating random numbers going fishing for content? If so, is this something we can control or minimize?
Intermediate & Advanced SEO | sitestrux
-
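Whatever is generating those huge values, the server can defend itself by validating the pagination variable before rendering a snapshot - out-of-range pages get a 404 instead of a full render. A minimal sketch (names are illustrative, not from the question's stack):

```python
def safe_page(raw_value, total_pages):
    """Parse a user-supplied page variable; return a valid page number,
    or None so the caller can answer 404 instead of building a snapshot."""
    try:
        page = int(raw_value)
    except (TypeError, ValueError):
        return None
    return page if 1 <= page <= total_pages else None
```

Answering a hard 404 for invalid pages (rather than falling back to page 1) also teaches the crawler that the fished-for URLs do not exist, which should reduce the repeat traffic over time.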
Need help with huge spike in duplicate content and page title errors.
Hi Mozzers, I come asking for help. A client of mine has reported a staggering increase of over 18,000 errors! The errors include duplicate content and duplicate page titles. I think I've found the culprit: the News & Events calendar on the following page: http://www.newmanshs.wa.edu.au/news-events/events/07-2013 Essentially each day of the week is an individual link, and events stretching over a few days get reported as duplicate content. Do you have any ideas how to fix this issue? Any help is much appreciated. Cheers
Intermediate & Advanced SEO | bamcreative
-
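If the per-day links must remain, pointing each day's URL at the month view with rel=canonical collapses the duplicates without removing the links. A sketch of the URL mapping only; the day-suffix format here is hypothetical, since only the month URL (/news-events/events/07-2013) is visible in the question:

```python
import re

# hypothetical day-view pattern: /news-events/events/07-2013/15
DAY_SUFFIX = re.compile(r"^(/news-events/events/\d{2}-\d{4})/\d{1,2}/?$")

def canonical_for(path):
    """Canonical URL for a calendar path: day views map to their month."""
    m = DAY_SUFFIX.match(path)
    return m.group(1) if m else path
```

The CMS template would then emit `<link rel="canonical" href="...">` with the returned URL, so every day of a month consolidates onto one indexable page.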
Huge google index with un-relevant pages
Hi, I run a site about sport matches. Every match has a page, and the pages are generated automatically from the DB. Pages are not duplicated, but over time some look a little similar. After a match finishes it has no internal links or sitemap entry, but it's still reachable by direct URL and stays in Google's index, so over time we have more than 100,000 indexed pages. Since past matches have no significance, aren't linked, and a match can repeat (which may look like duplicate content), what do you suggest we do when a match is finished - not linked, but still in the index and SERPs?
301 redirect the match page to the match category, which is a higher hierarchy and is always relevant?
Use rel=canonical to the match category?
Do nothing?
*A 301 redirect will shrink my index status, and some say a high index status is good. *Is it safe to 301 redirect 100,000 pages at once - wouldn't it look strange to Google? *Would canonical remove the past match pages from the index? What do you think? Thanks, Assaf.
Intermediate & Advanced SEO | stassaf
-
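Of the three options, a 301 to the match category is the common choice when the expired pages carry no unique value, and a mass redirect is often rolled out in waves rather than all at once. A rough sketch of both pieces; function names and the batch size are illustrative:

```python
def respond(match_is_finished, match_url, category_url):
    """Finished matches 301 to their category so they drop out of the
    index over time; live matches render normally."""
    if match_is_finished:
        return 301, category_url
    return 200, match_url

def rollout_batches(urls, batch_size=10000):
    """Stage a large redirect rollout in waves instead of one burst."""
    return [urls[i:i + batch_size] for i in range(0, len(urls), batch_size)]
```

Since the redirect decision lives in the page-generation code, repeat fixtures can simply reuse the same URL and flip back to a 200 when the match is live again.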
$1,500 question
I have $1,500 to spend to promote an 8-year-old website. Almost no SEO work has been done for the site in the past 3-4 years. The site has a couple hundred (around 300) external backlinks pointing to the homepage, and around 30 backlinks pointing to internal pages. It gets around 60% of its traffic from referring sites, 30% direct, and 10% from search engines. The homepage has PR 4. It ranks around 70th place in Google for one of the main keywords. No keyword research has been done for the site. Looking for long-term benefits, what would be the best way, in your opinion, to spend this money?
Intermediate & Advanced SEO | _Z_