403s vs 404s
-
Hey all,
Recently launched a new site on S3, and old pages that I haven't been able to redirect yet are showing up as 403s instead of 404s.
Is a 403 worse than a 404? They're both just basically dead-ends, right? (I have read the status code guides, yes.)
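A quick way to see what each old URL is actually returning is to request it and log the status code. A minimal sketch, assuming Python 3 and a hypothetical list of old URLs (replace with your own):

```python
from urllib import request, error

# Hypothetical old pages to audit; substitute your real paths.
OLD_URLS = [
    "https://example.com/old-page.html",
    "https://example.com/blog/",
]

def status_of(url: str) -> int:
    """Return the HTTP status code a HEAD request to `url` gets back."""
    req = request.Request(url, method="HEAD")
    try:
        with request.urlopen(req) as resp:
            return resp.status
    except error.HTTPError as e:
        return e.code  # urllib raises for 4xx/5xx; the code is on the exception

def is_dead_end(status: int) -> bool:
    """From a visitor's point of view, 403 and 404 are both dead ends."""
    return status in (403, 404, 410)
```

Looping `status_of` over the list would show exactly which paths S3 is answering with 403s versus 404s.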
-
Oh, I'm sorry, I clearly misunderstood the question.
I have not seen any studies or testing done on this, but I have to assume that they are ignored by spiders entirely. I certainly don't think they are more damaging than a 404 would be. A 404 tends to be ignored and only registered if a certain amount of time passes and the page is still not found. Google doesn't make it a habit to instantly remove URLs unless you ask them to.
At the very worst, the 403/404 error would de-index that particular URL, but this should not affect the rankings of your other pages or your site as a whole. And I think it'll take at least a good 30 days before Google stops crawling those. That said, it shouldn't be crawling them at all if there aren't any links pointing to them, either internally or externally. And if there are links pointing to the pages in question, you should be redirecting them via 301, assuming of course they are links you want to keep.
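The 301 mapping itself can be as simple as an old-path-to-new-path table. A minimal sketch of the logic (the paths here are made up for illustration):

```python
# Hypothetical old-path -> new-path map; fill in with your real URLs.
REDIRECTS = {
    "/old-about.html": "/about/",
    "/blog/": "/blog/contents.html",
}

def resolve(path: str):
    """Return the (status, location) pair the server should answer with."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]
    return 404, None  # anything unmapped is a genuine dead end
```

On S3 static hosting, the equivalent of this lookup is one routing rule per mapping rather than server-side code.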
Hope this was more helpful.
-
Hi Jesse,
Thanks for your response!
I understand the reason the 403s are happening; I was more curious as to whether they are more damaging to rankings when hit by a spider than a 404 would be.
-
A 403 (Forbidden) is only returned when the server is told to block access to a resource. If the site was built with WordPress in the past and has directories that match current directories, it may be returning 403 errors because the old and new site structures differ.
This is hard to explain, and I think the way I'm wording it is confusing.
Say your old site had domain.com/blog/ pointing to your blog's index, but now your index lives at domain.com/blog/contents.html. A request for /blog/ would then be asking for a directory listing, and most servers automatically return a 403 Forbidden for such requests.
Does this make sense? Might not be what's going on, but it's one possibility.
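If it is the /blog/ directory request that's misfiring, S3's static-website hosting supports routing rules that can 301 a path like that over to the new index page. A sketch of one such rule, using the example paths above (not verified against this particular site, and note that prefix matching would also catch everything under blog/):

```xml
<RoutingRules>
  <RoutingRule>
    <Condition>
      <KeyPrefixEquals>blog/</KeyPrefixEquals>
    </Condition>
    <Redirect>
      <!-- ReplaceKeyWith swaps the entire matched key for this value -->
      <ReplaceKeyWith>blog/contents.html</ReplaceKeyWith>
      <HttpRedirectCode>301</HttpRedirectCode>
    </Redirect>
  </RoutingRule>
</RoutingRules>
```

This goes in the bucket's static website hosting configuration; a narrower condition would be needed if other pages live under /blog/.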