Increase in authorization permission errors (Access Denied - Error 403)
-
Hi MOZ community,
Since last week, when I changed the theme on a WP installation, I have noticed (in WMT and the Moz tool) an increased number of authorization permission errors (error 403, Forbidden).
What happens is that I receive a 403 error for almost every single URL of my site. None of these URLs are "real" ones, but they all have my email address at the end.
e.g. I get a 403 error for "/contact/support@fantasylogic.com", whilst the real URL is just "/contact/".
This happens, as I said, for almost every single page of my site. I have no other crawling or indexation issues; all URLs are correctly indexed, and all new pages are correctly indexed as well. URIs ending with "support@fantasylogic.com" are, of course, not indexed.
WP and all installed plugins and the theme are on the latest available release. For SEO purposes I use the Yoast SEO WP plugin. The site in question is: fantasylogic.com
Any suggestions would be highly appreciated.
Thank you in advance
-
Thank you!
It makes sense. The email in the top-left corner of the bar is not a link, so I suppose that is not where the fault is located. BUT I had forgotten the mailto: prefix in the email link in the footer, so I have changed it there. Let's hope that was the cause.
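For anyone finding this later, the fix in the footer amounts to markup like this (the snippet is illustrative, not the theme's actual code):

```html
<!-- Broken: with no scheme, the bare address is treated as a relative path,
     so on /contact/ it resolves to /contact/support@fantasylogic.com -->
<a href="support@fantasylogic.com">support@fantasylogic.com</a>

<!-- Fixed: the mailto: scheme makes it an email link, not a relative URL -->
<a href="mailto:support@fantasylogic.com">support@fantasylogic.com</a>
```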
Thank you for the feedback
-
It is because the email link in your top bar is not formed correctly. You just have your email address there; you need to add mailto: in front of it.
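To see why that produces a 403 on every page: an href without a scheme is just a relative URL, so browsers and crawlers resolve it against whatever page the link appears on. A quick sketch (the page URL is taken from this thread; urljoin is only used here to illustrate the resolution a crawler performs):

```python
from urllib.parse import urljoin

page = "https://fantasylogic.com/contact/"

# A bare email address in href="" has no scheme, so it is resolved
# as a relative path against the current page:
print(urljoin(page, "support@fantasylogic.com"))
# -> https://fantasylogic.com/contact/support@fantasylogic.com

# With the mailto: scheme the URL is absolute and is left alone:
print(urljoin(page, "mailto:support@fantasylogic.com"))
# -> mailto:support@fantasylogic.com
```

Since the same footer link appears on every page, the crawler generates one of these phantom URLs per page, which is why the 403s show up site-wide.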
-
No, neither AWS nor S3. The main issue is that the URLs returning the 403 error are not "real". I mean that they are the same URLs as the existing/real ones, but with my email appended at the end (fantasylogic.com/xxxxxxxxxx/support@fantasylogic.com/). The "normal" URL (fantasylogic.com/xxxxxxxxxx/) is indexed correctly, and at the same time I receive a 403 error for "fantasylogic.com/xxxxxxxxxx/support@fantasylogic.com/". How and why are these URIs created? And why are they reachable by crawlers?
-
Are you hosted on AWS, or using resources from an S3 bucket? By default those resources do not send a 404 for not-found resources; they send a 403 instead (S3 returns 403 rather than 404 when the requester lacks permission to list the bucket, so it doesn't reveal whether an object exists).