4XX client error with email address in URL
-
I have an unusual situation I have never seen before, and I did not set up the server for this client. The 4XX report lists a string of about 74 URLs similar to this:
http://www.websitename.com/about-us/info@websitename.com
I will be contacting the server host as well to troubleshoot this issue. Any ideas?
Thanks
-
Hi EliteVenu! I'm so glad Ryan pointed you in the right direction.
If that turns out to fix the problem, mind marking one or both of his responses as a "Good Answer"?
-
Great! Glad I could help.
-
That gave me the right direction to look in! A social icon plugin didn't require the mailto: prefix in its dashboard settings (the field only said "enter your email address here"), and the theme wrote the bare address as an href in its code. I had looked at the source code but overlooked this small detail. I've removed the social icon email, so I'll see if that helps.
Thanks for the response!
-
Hi there! Tawny from the Help Team here - I think I can help provide a little bit of insight!
If you take a look at the Site Crawl report for this site's campaign and look at just the 4XX client errors, you'll see a Linking Page column in the table below the graph. That's the page from which our crawler arrived at the 404 page, and is where you can start looking for what went wrong.
I'd recommend taking a peek at that Linking Page's source code and searching for the email address - that's likely where you'll find the issue.
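To illustrate that search, here's a minimal sketch (a hypothetical helper, not a Moz tool) that scans a page's HTML for an email address and flags any occurrence that is missing the mailto: scheme in front of it:

```python
def find_email_hrefs(html, email):
    """Find each occurrence of `email` in the page source and report
    whether it is immediately preceded by the mailto: scheme."""
    hits = []
    start = 0
    while True:
        i = html.find(email, start)
        if i == -1:
            break
        # Peek at the characters just before the address; a proper
        # email link has "mailto:" immediately in front of it.
        context = html[max(0, i - len("mailto:")):i]
        hits.append((i, context == "mailto:"))
        start = i + 1
    return hits
```

Any bare match (flagged `False`) is a candidate for the relative-URL bug described in this thread.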
I hope this helps! Feel free to write in to us at help@moz.com if you still have questions and we'll do our best to help you out!
-
If that's what you're seeing, it looks like someone used a relative href link instead of a mailto: link for the email addresses on the about-us page.
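For reference, a browser resolves an href without a scheme as a relative URL against the current page, which is exactly how the malformed 404 URL above gets produced. A quick sketch with Python's standard `urllib.parse.urljoin` (using the example addresses from this thread) shows the mechanism:

```python
from urllib.parse import urljoin

page = "http://www.websitename.com/about-us/"

# Missing scheme: the address is treated as a relative path and
# resolved against the current directory, producing the 404 URL.
broken = urljoin(page, "info@websitename.com")
print(broken)  # http://www.websitename.com/about-us/info@websitename.com

# With the mailto: scheme, the link is left alone, as intended.
fixed = urljoin(page, "mailto:info@websitename.com")
print(fixed)  # mailto:info@websitename.com
```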
Related Questions
-
Need help fixing a duplicate content issue on my website. The Moz crawl is showing my website with both https:// and https://www., but I have never used the URL without www., so I don't understand why Moz is showing this
Moz is showing my URL with two different starts: https:// and the one I use, https://www. The problem is I don't think I have ever used the URL without the www. at the start. How do I fix this?
Moz Bar | jdp_uk
Referring URL Does Not Exist
I'm getting 250 or so 401 errors which say the referring URL is: https://www.carburetor-parts.com/assets/manuals/Carter_ThermoQuad_Carburetor.pdf. Interestingly, that file does not exist (it may have at one time). At any rate, can a PDF have a URL? I can't find the reference anywhere. Any ideas? Thanks, Mike
Moz Bar | MikeCarbs
Crawl report shows that it gets 4xx errors for pages that work fine. Why?
On the crawl report it has all these "Critical Crawler Issues". They all say "4xx Error", yet when I click the link from the crawler report, it goes to a perfectly functioning page, not a 404 page or anything. If I click into the detail, it actually says it's a 403 error. It's all for pages generated by the IDX solution for our real estate website. Is Moz broken or am I missing something? Here are a couple of examples: https://teamvivi.com/homes-for-sale-map-search/ and https://teamvivi.com/email-alerts/
Moz Bar | TeamViviRealEstate
Moz keyword mention on-page counting errors
Hi. Moz is showing 18 mentions of the keyword 'street furniture' on this landing page: https://www.broxap.com/street-furniture.html. But I can only count 6 in total in the body copy, and 13 if you include navigation links. It's the same on other pages for that keyword. Does anyone know where it's counting these extra keywords from? I don't want to fall foul of keyword stuffing, but as far as I can see we're not! Could Moz be miscalculating? Any help appreciated! Thanks, Joe
Moz Bar | iweb_agency
Perplexed by duplicate content errors in the last Moz crawl
In the last crawler issues report from Moz I can see many, many pages listed as duplicate content with 0 duplicate URLs, like this: http://imgur.com/fbikRVq. I am puzzled; what does it mean?
Moz Bar | max.favilli
WP 4.0 Update Causing Major Duplicate Content Errors?
According to my Moz Analytics, my site's duplicate content has gone through the roof. There's a nice Mozzer named Abe looking into this with me, but I'm wondering if it could be due to the WP 4.0 update. Has anyone else experienced an uptick like this before? I've never had any problems with the other updates. Thanks, Ruben
Moz Bar | KempRugeLawGroup
Getting 'Sorry, but that URL is inaccessible' error msg when trying to run On-Page Grader
I just signed up for Moz Pro for the first time today. I tried to run the On-Page Grader tool on some of my pages, but I'm getting a 'Sorry, but that URL is inaccessible' error message. I have verified against the robots.txt file that the pages are NOT blocking any crawlers. Can anybody help?
Moz Bar | spinoki
Ajax #! URL support?
Hi Moz,
My site currently follows the convention outlined here: https://support.google.com/webmasters/answer/174992?hl=en
Basically, since pages are generated via Ajax, we are set up to direct bots that replace the #! in a URL with ?escaped_fragment to cached versions of the Ajax-generated content. For example, if a bot sees this URL: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 it will instead access this page: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 in which case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine.
However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, you look to see whether it is a #! and then spider the URL with the #! replaced by ?escaped_fragment. Our server does the rest. If this is something Moz plans on supporting in the future, I would love to know; any other information would be great too. Also, pushState is not practical for everyone due to limited browser support, etc. Thanks, Dustin
Update: I am editing my question because it won't let me respond to my own question. It says I need to sign up for Moz Analytics, but I was signed up for Moz Analytics. Now I am not? I responded to my invitation weeks ago. Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page: http://www.discoverymap.com/#!/California/Map-of-Carmel/73 and when it is ready to spider the page for content, it spiders this URL instead: http://www.discoverymap.com/?escaped_fragment=/California/Map-of-Carmel/73 The server does the rest; it is simply telling Roger to recognize the #! format and replace it with ?escaped_fragment. I obviously do not know how Roger is coded, but it is a simple string replacement. Thanks.
Moz Bar | oneactlife
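The rewrite described above really is a simple string replacement. Here's a minimal sketch, with the caveat that Google's (now deprecated) AJAX crawling scheme officially names the parameter `_escaped_fragment_` (with underscores, unlike the shorthand in the post) and also percent-encodes special characters in the fragment value, which is omitted here for clarity:

```python
def to_escaped_fragment(url):
    """Rewrite an AJAX hashbang URL (#!) into the equivalent
    ?_escaped_fragment_= form from Google's AJAX crawling scheme.

    Simplified sketch: the real scheme also percent-encodes the
    fragment value before appending it."""
    base, sep, fragment = url.partition("#!")
    if not sep:
        return url  # no hashbang present; nothing to rewrite
    joiner = "&" if "?" in base else "?"
    return base + joiner + "_escaped_fragment_=" + fragment

print(to_escaped_fragment("http://www.discoverymap.com/#!/California/Map-of-Carmel/73"))
# http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
```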