404 (not found) errors seem spurious, caused by a "\" added to the URL
-
Hi SEOmoz folks
We're getting a lot of 404 (not found) errors in our weekly crawl.
However the weird thing is that the URLs in question all have the same issue.
They are all a valid URL with a backslash ("\") appended; in URL encoding, this shows up as an extra %5C at the end of the URL.
Even weirder, we do not have any such URLs on our (WordPress-based) website.
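For reference, a single backslash percent-encodes to %5C, which is why it shows up that way in the crawl report. A quick sketch using Python's standard library (the URL here is a hypothetical placeholder, not one of ours):

```python
from urllib.parse import quote

# A valid URL with a stray backslash appended (example.com is a placeholder)
url = "http://example.com/some-page/" + "\\"

# Percent-encoding the backslash yields %5C, matching the crawl report
encoded = quote(url, safe=":/")
print(encoded)  # http://example.com/some-page/%5C
```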
Any insight on how to get rid of this issue?
Thanks
-
No, Google Webmaster Tools does not list an error here.
It's indeed an SEOmoz bug. Ryan, thanks for trying though!
-
My request is for a real link that I can click on and view the page.
In most cases where someone has described an issue to me, a key piece of information was left out or overlooked. If you cannot share that information, I understand; I only ask in the interest of being helpful.
It is entirely possible this is a crawler issue, but it is also possible the crawler is functioning perfectly and Google's crawler will produce the same result. That is my concern.
-
Well, actually I did already. The example I gave above is exactly that, only I replaced the real URL with "URL".
In a bit greater detail, the referring page is actually URL1 and this page contains the javascript
item = '<a href=\'URL2\'>text</a>';
which produces 404 errors for URL2 in the SEOmoz crawl report.
-
It is entirely possible the issue is with the SEOmoz crawler. I would like to see it improved as well.
I am concerned the root issue may actually be with your site. Would you be willing to share an example of a link which is flagged in your report along with the referring page?
-
Thanks for the tips. After drilling down on the referer, this looks like an SEOmoz bug.
We are using a WordPress plugin called "collapsing archives" which creates legitimate archive links with a JavaScript snippet like this:
item = '<a href=\'URL2\'>text</a>';
As you can see, this is perfectly valid JavaScript. But it seems SEOmoz is scanning the JavaScript without interpreting it, picking up the escaped quotation mark (\') after the URL, and treating the backslash as part of the URL.
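A rough illustration of how a crawler that scans raw page source without interpreting JavaScript string escapes could make this mistake (the URL, link text, and regex here are hypothetical stand-ins, not the actual crawler's logic):

```python
import re
from urllib.parse import quote

# JavaScript as it appears in the raw page source: the quotes around the
# href value are escaped because they sit inside a single-quoted JS string.
js_source = r"item = '<a href=\'http://example.com/2011/03/\'>March 2011</a>';"

# A naive extractor that treats the raw bytes as HTML and grabs everything
# up to the next quote character -- including the escaping backslash.
match = re.search(r"href=\\?'([^']*)'", js_source)
naive_url = match.group(1)
print(naive_url)                    # http://example.com/2011/03/\
print(quote(naive_url, safe=":/"))  # http://example.com/2011/03/%5C
```

A JavaScript-aware parser would decode `\'` back to a plain quote and extract the URL correctly; a raw-text scan keeps the backslash and then requests the %5C variant.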
Since the plugin behaves correctly and works well, we want to keep using it. What's the chance that SEOmoz will fix the bug?
-
Many people do not realize that when you add a backslash character, you change the URL. The server can actually serve a different web page for the URL with the trailing backslash.
A common cause of the problem is linking. If you check your weekly crawl report, there will be a column called Referrer; that is the source of the link. Check the referring page and find the link. Fix the link (i.e. remove the trailing backslash) and the problem will go away on the next crawl. Of course, you also want to determine how the link appeared in the first place and ensure it doesn't happen again.
-
If I had to guess, I'd look into any JavaScript on the page that is perhaps adding or pointing to the URL with a backslash.