Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
403s vs 404s
-
Hey all,
Recently launched a new site on S3, and old pages that I haven't been able to redirect yet are showing up as 403s instead of 404s.
Is a 403 worse than a 404? They're both just basically dead-ends, right? (I have read the status code guides, yes.)
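A quick way to see exactly what each old URL returns is a short script along these lines. This is only a rough sketch using Python's requests library; the domain and paths are placeholders, not the actual site.

```python
# Rough sketch: print the status code each old URL currently returns.
# Assumes the requests library is installed; domain and paths are placeholders.
import requests

BASE = "https://www.example.com"
OLD_PATHS = ["/blog/", "/old-page.html", "/another-old-page/"]

for path in OLD_PATHS:
    resp = requests.get(BASE + path, allow_redirects=False, timeout=10)
    print(f"{resp.status_code}  {path}")
    # 403 = forbidden, 404 = not found, 301/302 = already redirected
```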
-
Oh I'm sorry I clearly misunderstood the question.
I have not seen any studies or testing done on this, but I have to assume that they are essentially ignored by spiders. I certainly don't think a 403 is more damaging than a 404 would be. A 404 tends to be ignored and only acted on if a certain amount of time passes and the page is still not found. Google doesn't make a habit of instantly removing URLs unless you ask it to.
At the very worst, the 403/404 error would de-index that particular URL, but this should not affect the rankings of your other pages or your site as a whole. I'd expect it to take at least a good 30 days before Google stops crawling those URLs. That said, it shouldn't be crawling them at all if there aren't any links pointing to them, either internally or externally. And if there are links pointing to the pages in question, you should be redirecting them via 301 - that is, of course, if they are links you want.
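If it helps, here is a rough sketch of how such a 301 can be set up on S3 static website hosting using boto3 routing rules. The bucket name and keys are placeholders, and this assumes the bucket is already configured for website hosting; it is not necessarily how your stack is set up.

```python
# Sketch only: 301-redirect an old URL to its replacement via an S3 routing rule.
# Assumes boto3 is installed and the bucket already serves a static website.
# Bucket name and keys are placeholders. Note: put_bucket_website replaces the
# bucket's entire website configuration, so include the full config each time.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},  # dead URLs then serve a real 404 page
        "RoutingRules": [
            {
                "Condition": {"KeyPrefixEquals": "old-page.html"},
                "Redirect": {"ReplaceKeyWith": "new-page.html", "HttpRedirectCode": "301"},
            }
        ],
    },
)
```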
Hope this was more helpful.
-
Hi Jesse,
Thanks for your response!
I understand the reason the 403s are happening; I was more curious as to whether a 403 is more damaging to rankings when hit by a spider than a 404 would be.
-
A 403 is a "Forbidden" response that is only returned when the server is told to block access to the file. If the site had been built with WordPress in the past and has directories that match current directories, it may be returning 403 errors because the site structure differs.
This is hard to explain, and I think my wording of it is confusing.
Say on your old site you had domain.com/blog/ and that went to your blog's index, but now you have domain.com/blog/contents.html as your index. The /blog/ request would be trying to pull a directory, and your server would normally return a 403 Forbidden for such requests automatically.
Does this make sense? Might not be what's going on, but it's one possibility.
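If the site is on S3 static hosting, another hedged possibility for the directory-style URL specifically is an object-level redirect, so /blog/ sends a 301 to the new index page instead of erroring. A boto3 sketch, with bucket and key names as placeholders:

```python
# Sketch only: create a placeholder object at the old "directory" URL whose only
# job is to 301 visitors to the new index page. S3 website hosting (the website
# endpoint, not the plain REST endpoint) honours the redirect metadata set via
# WebsiteRedirectLocation. Assumes the bucket's IndexDocument suffix is index.html.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="www.example.com",                      # placeholder bucket name
    Key="blog/index.html",                         # served when /blog/ is requested
    WebsiteRedirectLocation="/blog/contents.html",
    Body=b"",                                      # content is irrelevant; the redirect metadata matters
)
```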
Related Questions
-
Are there ways to avoid false positive "soft 404s" by Google
Sometimes I get alerts from Google Search Console that it has detected soft 404s on different websites, and since I take great care to never have true soft 404s, they are always false positives. Today I got one on a website that has pages promoting some events. The language on the page for one event that has sold out says that "tickets are no longer available", which seems to have tripped up Google into thinking the page is a soft 404. It's kind of incredible to me that in the current era, with things like ChatGPT, Google doesn't seem to understand natural language. But that has me thinking: are there some strategies or best practices we can use in how we write copy on the page so Google doesn't flag it as a soft 404? It seems like anything that could tell a user that an item isn't available could trip it up into thinking it is a 404. In the case of my page, it's actually important information we need to tell the public that an event has sold out, but to use their interest in that event to promote other events, so I don't want the page deindexed or not to rank well!
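One hedged idea, on top of the copy itself: mark the sold-out offer up in structured data so Google gets an explicit, machine-readable signal that this is still a live event page rather than an error page. A small Python sketch that prints the JSON-LD snippet; all event details are placeholders:

```python
# Hedged sketch: Event structured data whose offer is explicitly marked SoldOut.
# All event details below are placeholders.
import json

event_jsonld = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Example Concert",
    "startDate": "2025-06-01T20:00",
    "eventStatus": "https://schema.org/EventScheduled",
    "offers": {
        "@type": "Offer",
        "availability": "https://schema.org/SoldOut",  # explicit "sold out" signal
        "url": "https://www.example.com/events/example-concert",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(event_jsonld, indent=2))
print("</script>")
```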
Technical SEO | IrvCo_Interactive
-
DNS vs IIS redirection
I'm working on a project where a site has gone through a rebrand and is therefore also moving to a new domain name. Some pages have been merged on the new site so it's not a lift and shift job and so I'm writing up a redirect plan. Their IT dept have asked if we want redirects done by DNS redirect or IIS redirect. Which one will allow us to have redirects on a page level and not a domain level? I think IIS may be the right route but would love your thoughts on this please.
Technical SEO | Marketing_Today
-
Robots.txt on http vs. https
We recently changed our domain from http to https. When a user enters any URL on http, there is a global 301 redirect to the same page on https. I cannot find instructions about what to do with robots.txt. Now that https is the canonical version, should I block the http version with robots.txt? Strangely, I cannot find a single resource about this...
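A quick way to sanity-check the setup is to look at what each protocol version of robots.txt actually serves. A rough Python sketch with a placeholder domain:

```python
# Rough sketch: show how robots.txt responds on http vs https.
# Assumes the requests library; the domain is a placeholder.
import requests

for url in ("http://www.example.com/robots.txt", "https://www.example.com/robots.txt"):
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(f"{resp.status_code}  {url}  ->  {resp.headers.get('Location', '')}")
# If the global 301 covers robots.txt as well, the http version simply redirects
# to the https one, so there is no separate http robots.txt left to block anything with.
```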
Technical SEO | zeepartner
-
Subdomain vs Main Domain Penalties
We have a client whose main root.com domain is currently penalized by Google, but the subdomain.root.com is appearing very well. We're stumped - any ideas why?
Technical SEO | Prospector-Plastics
-
Www vs non-www which is better?
Is it better to have all your pages point to the www version or the non-www version?
Technical SEO | bronxpad
-
Internal search : rel=canonical vs noindex vs robots.txt
Hi everyone, I have a website with a lot of internal search results pages indexed. I'm not asking if they should be indexed or not; I know they should not, according to Google's guidelines. And they make a bunch of duplicated pages, so I want to solve this problem. The thing is, if I noindex them, the site is gonna lose a non-negligible chunk of traffic: nearly 13% according to Google Analytics! I thought of blocking them in robots.txt. This solution would not keep them out of the index, but the pages appearing in Google SERPs would then look empty (no title, no description), thus their CTR would plummet and I would lose a bit of traffic too... The last idea I had was to use a rel=canonical tag pointing to the original search page (that is empty, without results), but it would probably have the same effect as noindexing them, wouldn't it? (Never tried, so I'm not sure of this.) Of course I did some research on the subject, but each of my findings recommended one of the 3 methods only! One even recommended a noindex + robots.txt block, which is stupid because the noindex would then be useless... Is there somebody who can tell me which option is the best to keep this traffic? Thanks a million
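For the noindex option specifically, here is a hedged sketch of sending the directive as an HTTP header on internal search URLs. Flask and the /search path are pure assumptions about the stack; the point is that a "noindex, follow" header keeps the pages crawlable (unlike a robots.txt block) while letting them drop out of the index:

```python
# Hedged sketch: add X-Robots-Tag: noindex, follow to internal search result pages.
# Flask and the "/search" path are assumptions, not the asker's actual stack.
from flask import Flask, request

app = Flask(__name__)

@app.route("/search")
def search():
    # placeholder internal search results page
    return "search results here"

@app.after_request
def noindex_internal_search(response):
    if request.path.startswith("/search"):
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```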
Technical SEO | JohannCR
-
Ror.xml vs sitemap.xml
Hey Mozzers, So I've been reading some things lately, and some are saying that the top search engines do not use the ror.xml sitemap but focus just on sitemap.xml. Is that true? Do you use ror? If so, for what purpose - products, "special articles", other uses? Can sitemap.xml be sufficient for all of those? Thank you, Vadim
Technical SEO | vijayvasu
-
Syndication: Link back vs. Rel Canonical
For content syndication, let's say I have the choice of (1) a link back or (2) a cross-domain rel canonical to the original page. Which one would you choose and why? (I'm trying to pick the best option to save dev time!) I'm also curious to know what the difference in SERPs would be between the link-back and the canonical solution, both for the original publisher and for syndication partners. (I would prefer the syndication partners not disappear entirely from SERPs, I just want to make sure I'm first!) A side question: what's the difference in real life between the Google source attribution tag and the cross-domain rel canonical tag? Thanks! PS: Don't know if it helps, but note that we can syndicate 1 article to multiple syndication partners (it wouldn't be impossible to see 1 article syndicated to 50 partners).
Technical SEO | raywatson