Client error 404
-
I have a lot (100+) of 404s. I had even more the last time, so I rearranged the whole site and even changed it from .php to .html. I went to the web host and deleted all of the .php files from the main server. Still, after yesterday's crawl I'm getting 404s for my (deleted) .php pages.
There are also other links showing errors for pages that aren't there. Maybe those pages existed before the site remodelling, but I don't think so, because .html pages are also affected.
How can this be happening?
-
I am using the SEOmoz crawler, and I am using Dreamweaver. It was Dreamweaver that didn't do its job: it was supposed to update the links in all of the documents, but for some reason it skipped some files (not all, but some).
In other words: everything is OK on my end.
Sorry.
-
Hi Tobias - how are you checking 404s? Are you using the SEOmoz crawl diagnostics?
If so, export the CSV! Many people don't do this, yet there is a plethora of info in there.
- Open the CSV and sort by the "4xx" column so that all of the 'TRUE' cell values are at the top.
- Delete/hide/move all columns except: URL, 4xx, Time Crawled, and Referrer.
Now you can see which URLs are 404ing, how the crawl found them (where the link is that sent the crawler there), and what time each was crawled (in case you've fixed it since).
It's easy and it's thorough.
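The sort-and-filter workflow above can also be scripted. Here is a minimal sketch in Python with pandas; the column names ("URL", "4xx", "Time Crawled", "Referrer") and the TRUE/FALSE values are assumptions based on the export described above, and the sample rows are made up for illustration:

```python
import io
import pandas as pd

# A few sample rows standing in for a real crawl-diagnostics CSV export;
# the column names and values here are illustrative assumptions, not the
# exact export format.
sample = io.StringIO(
    "URL,4xx,Time Crawled,Referrer\n"
    "http://example.com/old.php,TRUE,2012-05-02 09:00,http://example.com/index.html\n"
    "http://example.com/index.html,FALSE,2012-05-02 09:00,\n"
    "http://example.com/gone.php,TRUE,2012-05-01 09:00,http://example.com/about.html\n"
)
df = pd.read_csv(sample)

# Keep only the rows flagged TRUE in the 4xx column (robust whether pandas
# parsed the column as booleans or as strings).
errors = df[df["4xx"].astype(str).str.upper() == "TRUE"]

# Keep just the columns needed to diagnose each 404: the broken URL, the
# page that linked to it, and when it was crawled.
report = errors[["URL", "4xx", "Time Crawled", "Referrer"]]

# Most recently crawled first, in case some have been fixed since.
report = report.sort_values("Time Crawled", ascending=False)
print(report.to_string(index=False))
```

To work with a real export, replace `sample` with the path to the downloaded CSV file.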
Related Questions
-
Soft 404 in Search Console
Search Console is showing quite a lot of soft 404 pages on my site, but when I click on the links, the pages are all there. Is there a reason for this? It's a pretty big site - I'm getting 141 soft 404s from about 20,000 pages.
Technical SEO | abisti20 -
404 Errors for Form Generated Pages - No index, no follow or 301 redirect
Hi there, I wonder if someone can help me out and provide the best solution for a problem with form-generated pages. I have blocked the search results pages from being indexed by using the 'noindex' tag, and I wondered if I should take the same approach for the following pages. I have seen a huge increase in 404 errors since the new site structure went live and forms started being filled in. This is because every time a form is filled in, it generates a new page, which only Google Search Console is reporting as a 404. While some 404s can be explained and resolved, I wondered what is best to prevent Google from crawling pages like this: mydomain.com/webapp/wcs/stores/servlet/TopCategoriesDisplay?langId=-1&storeId=90&catalogId=1008&homePage=Y
1. Implement a 301 redirect using rules, which will mean that all these pages redirect to the homepage. While in theory this will protect any linked-to pages, it does not resolve the issue of why GSC is recording them as 404s in the first place. It could also come across to Google as 100,000+ redirected links, which might look spammy.
2. Place a noindex tag on these pages too, so they will not get picked up, in the same way the search result pages are not being indexed.
3. Block them in robots.txt - this will prevent any 'result' pages being crawled, which will reduce the crawl time currently being taken up. However, I'm not entirely sure if the block will be possible? I would need to block anything after domain/webapp/wcs/stores/servlet/TopCategoriesDisplay?. Hopefully this is possible?
The noindex tag will take time to set up, as it needs to be scheduled in with the development team, but the robots.txt change will be a quicker fix, as this can be tested in GSC. I really appreciate any feedback on this one. Many thanks
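For the robots.txt option, a block along these lines might work. robots.txt rules are prefix matches, so disallowing everything up to and including the `?` would cover every query-string variant of that servlet URL. This is only a sketch based on the example URL above, to be verified against a robots.txt tester before deploying:

```
User-agent: *
Disallow: /webapp/wcs/stores/servlet/TopCategoriesDisplay?
```

Note that a robots.txt block stops crawling but does not by itself remove URLs that are already indexed.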
Technical SEO | Ric_McHale0 -
Homepage De-Indexed - No Errors, No Warnings
Hi, I am currently working on this project. Sometime between March 7th and 8th the homepage was de-indexed; the rest of the pages are still there. I found out through decreased traffic in GA. No notifications of any kind of penalty or errors were received. I tried to manually re-index through "Fetch as Google" in WMT, to no avail. The site is redirected to https. Any suggestions would be highly appreciated. Thank you in advance.
Technical SEO | gpapatheodorou0 -
GWT Error for RSS Feed
Hello there! I have a new RSS feed that I submitted to GWT. The feed validates with no problems on http://validator.w3.org/feed/, and when I test the feed in GWT it also comes back OK - it finds all the content with "No errors found". Recently, though, I got an issue with GWT not being able to read the RSS feed: an error on line 697, "We were unable to read your Sitemap. It may contain an entry we are unable to recognize. Please validate your Sitemap before resubmitting." I am assuming this is an intermittent issue; possibly we had a server issue on the site last night, etc. I am checking with my developer this morning. Wanted to see if anyone else has had this issue, whether it resolved itself, etc. Thanks!
Technical SEO | CleverPhD0 -
Have a client whose name is Scott Gable and his profession is photography
When I do a search for Scott Gable (just his name), Google comes up like this (without the sitelinks): http://chrle.us/MGer When I add "photography" to the search query, it comes up like this (with the sitelinks structured below): http://chrle.us/MHXy Is his name so common that the full sitelinks wouldn't appear below it on the plain "scott gable" search? Would 301 redirecting scottgablephotography.com to scottgable.com help fix this?
Technical SEO | callmeed0 -
Dealing with 404 pages
I built a blog on my root domain while I worked on another part of the site at .....co.uk/alpha. I was really careful not to have any links go to /alpha, but it seems Google found and indexed it. The problem is that part of /alpha was a copy of the blog, so now we have a lot of duplicate content. The /alpha part is now ready to be taken over to the root domain; the initial plan was to then delete /alpha. But now that it's indexed, I'm worried that I'll have all these 404 pages, and I'm not sure what to do. I know I can just do a 301 redirect for all those pages to the corresponding ones on the root domain in case a link comes in, but I need to delete those pages as the server is already very slow. Or does a 301 redirect mean that I don't need those pages anymore? Will those pages still get indexed by Google as separate pages? Please assist.
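The redirect described above can usually be done with a single server rule rather than page by page. As a sketch, assuming Apache with mod_alias (adjust for your server), the following maps every /alpha/ URL onto its root-domain counterpart:

```
# Permanently (301) redirect /alpha/anything to /anything at the root.
# Assumes Apache with mod_alias; place in the site config or .htaccess.
RedirectMatch 301 ^/alpha/(.*)$ /$1
```

Because the server answers with the redirect before it ever looks for a file, the /alpha pages themselves can be deleted once a rule like this is in place.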
Technical SEO | borderbound0 -
Duplicate page content errors in SEOmoz
Hi everyone, we just launched this new site, and when I ran it through SEOmoz I got a bunch of duplicate page content errors. Here's one example - it says these three are duplicate content: http://www.alicealan.com/collection/alexa-black-3inch http://www.alicealan.com/collection/alexa-camel-3inch http://www.alicealan.com/collection/alexa-gray-3inch You'll see from the pages that the titles, images, and small pieces of the copy are all unique, but some of the copy is the same (after all, these are pretty much the same shoe, just in a different color). So why am I getting this error, and is there a best way to address it? Thanks so much!
Technical SEO | ketanmv
Ketan0 -
Is this 404 page indexed?
I have a URL that, when searched for, shows up as the first result in the Google index but does not have any title or description attached to it. When you click on the link, it goes to a 404 page. Is it simply that Google is removing it from the index and it's in some sort of transitional phase, or could there be another reason?
Technical SEO | bfinternet0