403s vs 404s
-
Hey all,
Recently launched a new site on S3, and old pages that I haven't been able to redirect yet are showing up as 403s instead of 404s.
Is a 403 worse than a 404? They're both just basically dead-ends, right? (I have read the status code guides, yes.)
-
Oh, I'm sorry, I clearly misunderstood the question.
I haven't seen any studies or testing on this, but I have to assume spiders mostly ignore them. I certainly don't think a 403 is more damaging than a 404 would be. A 404 tends to be ignored and only registered once a fair amount of time has passed and the page is still not found. Google doesn't make a habit of instantly removing URLs unless you ask it to.
At the very worst, the 403/404 error would de-index that particular URL, but this shouldn't affect the rankings of your other pages or your site as a whole. And I'd expect it to take a good 30 days or more before Google stops crawling those URLs. That said, it shouldn't be crawling them at all if there are no links pointing to them, either internally or externally. And if there are links pointing to the pages in question, you should be 301-redirecting them, assuming, of course, they're links you want to keep.
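Since the site is hosted on S3 there's no .htaccess to edit, but S3 static website hosting can issue those 301s itself via routing rules in the bucket's website configuration. A minimal sketch of building those rules; the old/new key names are hypothetical examples, and the helper function is just for illustration:

```python
# Sketch: express an {old URL -> new URL} mapping as S3 website RoutingRules,
# which make the bucket itself answer with real 301 redirects.
# The key names used below are made-up examples.

def routing_rules(redirects):
    """Map {old_key: new_key} to an S3 RoutingRules list (all 301s)."""
    return [
        {
            "Condition": {"KeyPrefixEquals": old},
            "Redirect": {"ReplaceKeyWith": new, "HttpRedirectCode": "301"},
        }
        for old, new in redirects.items()
    ]

rules = routing_rules({
    "old-blog/post.html": "blog/contents.html",  # hypothetical mapping
})
# The resulting list goes into the bucket's WebsiteConfiguration, e.g. via
# `aws s3api put-bucket-website` or boto3's put_bucket_website.
```

Note that S3 caps the number of routing rules per bucket, so for a large number of old URLs you may need redirect objects (`x-amz-website-redirect-location`) instead.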
Hope this was more helpful.
-
Hi Jesse,
Thanks for your response!
I understand why the 403s are happening; I was more curious whether they're more damaging to rankings when hit by a spider than a 404 would be.
-
A 403 is a Forbidden response, returned only when the server is configured to block access to the resource. If the site was built with WordPress in the past and old directory paths overlap with current ones, the server may be returning 403s wherever the URL structure now differs.
This is hard to explain, and I think my wording is confusing.
Say your old site served domain.com/blog/ as your blog's index, but now the index lives at domain.com/blog/contents.html. A request for /blog/ is now asking for a directory rather than a file, and most servers will automatically return a 403 Forbidden for directory requests that have no index document.
Does this make sense? Might not be what's going on, but it's one possibility.
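To make the directory-index case concrete, here's a toy model (not any real server's code) of the decision a static host makes for a request path, which is why /blog/ can return a 403 while a plain missing file returns a 404:

```python
def status_for(path, objects, index_suffix="index.html"):
    """Toy decision logic for a static host (e.g. an S3 website endpoint).

    objects: the set of stored object keys.
    """
    key = path.lstrip("/")
    if key in objects:
        return 200                      # the exact object exists
    if path.endswith("/"):
        if key + index_suffix in objects:
            return 200                  # served by the index document
        # No index document: fulfilling this would mean listing the
        # directory, which is typically forbidden, hence 403 not 404.
        return 403
    return 404                          # ordinary missing object

site = {"blog/contents.html", "about.html"}
print(status_for("/blog/", site))        # 403: no blog/index.html
print(status_for("/about.html", site))   # 200
print(status_for("/gone.html", site))    # 404
```

One real-world wrinkle worth knowing: S3 itself returns a 403 rather than a 404 for any missing key when the requester lacks the s3:ListBucket permission (so it doesn't leak whether an object exists), which may be exactly what's producing the 403s described here.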