Crawl diagnostic issue?
-
I'm sorry if my English isn't very good, but this is my problem at the moment:
On two of my campaigns I get a weird error on Moz Analytics:
605 Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag
Moz Analytics points to a URL that starts with http:/**/None/**www.????.com. We don't understand how Moz indexed this non-existent page that starts with None. How can we solve this error?
I hope that someone can help me.
-
Hi MOZ,
I'm sorry that I have not responded sooner. The problem has been solved. Thanks!
Also thanks to Pixel for the response!
Greetz,
Sam
-
Hi Nettt!
I apologize for any confusion and can confirm there is no issue on your side. One of our crawlers failed, causing some campaigns crawled on Aug 29th to attempt to follow the strange /None/ URL you are seeing in your diagnostics. I've submitted a re-crawl for all of your affected campaigns, so you should see updated data by this Friday.
Hope this helps!
-
"I have checked the URL, and it is not our own website that has the error."
Is this the problem?
Could you take a screen grab of the problem? It might help us understand it better.
-
Thanks for the response, Pixelbypixel!
I have checked the URL, and it is not our own website that has the error.
We have checked the robots.txt and it should not cause any problem. We haven't changed it recently.
I think that Moz is causing it, but I am not sure.
-
Is the URL correct in Moz Pro? It also seems like your robots.txt is blocking Moz, which you may want to look into.
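One quick way to check is Python's standard-library robots.txt parser; Moz's crawler identifies itself as rogerbot. The robots.txt content and URLs below are hypothetical stand-ins, not the poster's actual site:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- swap in your site's actual file.
robots_txt = """\
User-agent: rogerbot
Disallow: /private/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Moz's crawler identifies itself as "rogerbot".
print(parser.can_fetch("rogerbot", "https://www.example.com/private/page"))  # False
print(parser.can_fetch("rogerbot", "https://www.example.com/public/page"))   # True
```

If the second call returns False for your real robots.txt, Moz is being blocked site-wide and the 605 errors would follow from that.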
Related Questions
-
302 > 302 > 301 Redirect Chain Issue & Advice
Hi everyone, I recently relaunched our website and everything went well. However, while checking site health, I found a new redirect chain issue (302 > 302 > 301 > 200) when the user requests the HTTP and non-www version of our URL. Here's what's happening:
• 302 #1 -- http://domain.com/example/ 302 redirects to http://domain.com/PnVKV/example/ (the 5 characters in the appended "subfolder" are dynamic and change each time)
Intermediate & Advanced SEO | Andrew_In_Search_of_Answers
• 302 #2 -- http://domain.com/PnVKV/example/ 302 redirects BACK to http://domain.com/example/
• 301 #1 -- http://domain.com/example/ 301 redirects to https://www.domain.com/example/ (as it should have done originally)
• 200 -- https://www.domain.com/example/ resolves properly

We're hosted on AWS, and one of my cloud architects investigated and reported GoDaddy was causing the two 302s. That's backed up online by posts like https://stackoverflow.com/questions/46307518/random-5-alpha-character-path-appended-to-requests and https://www.godaddy.com/community/Managing-Domains/My-domain-name-not-resolving-correctly-6-random-characters-are/td-p/60782. I reached out to GoDaddy today, expecting them to say it wasn't a problem on their end, but they actually confirmed this was a known bug (as of September 2017) but there is no timeline for a fix. I asked the first rep I spoke with on the phone to send a summary, and here's what he provided in his own words:

"From the information gathered on my end and I was able to get from our advanced tech support team, the redirect issue is in a bug report and many examples have been logged with the help of customers, but no log will be made in this case due to the destination URL being met. Most issues being logged are site not resolving properly or resolving errors. I realize the redirect can cause SEO issues with the additional redirects occurring. Also no ETA has been logged for the issue being reported. I do feel for you since I now understand more the SEO issues it can cause. I myself will keep an eye out for the bug report and see if any progress is being made any info outside of this I will email you directly. Thanks.

Issue being Experienced: Domains that are set to Go Daddy forwarding IPs may sometimes resolve to a url that has extra characters appended to the end of them. Example: domain1.com forwards to http://www.domain2.com/TLYEZ. However it should just forward to http://www.domain2.com."

I think this answers what some Moz users may have been experiencing sporadically, especially this previous thread: https://moz.com/community/q/forwarded-vanity-domains-suddenly-resolving-to-404-with-appended-url-s-ending-in-random-5-characters.
My question: Given everything stated above and what we know about the impact of redirect chains on SEO, how severe should I rate this? I told my Director that I would recommend we move away from GoDaddy (something I don't want to do, but feel we have to do), but she viewed it as just another technical SEO issue and one that didn't necessarily need to be prioritized over others related to the relaunch. How would you respond in my shoes? On a scale of 1 to 10 (10 being the biggest), how big of a technical SEO issue is this? Would you make it a priority? At the very least, I thought the Moz community would benefit from the GoDaddy confirmation of this issue and knowing about the lack of an ETA on a fix. Thanks!
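A chain like this can be spotted programmatically. The sketch below walks a redirect chain from a static url -> (status, location) map standing in for live requests; the URLs echo the hypothetical domain.com examples above, simplified to a linear chain (the real bug bounces back through the original URL before the 301, which a static map can't express):

```python
def trace_redirects(url, responses, max_hops=10):
    """Walk a redirect chain using a url -> (status, location) mapping.

    `responses` stands in for live HTTP lookups; in practice you would issue
    requests with auto-redirects disabled and read each Location header.
    """
    chain = [(url, responses[url][0])]
    while responses[url][0] in (301, 302, 307, 308) and len(chain) <= max_hops:
        url = responses[url][1]
        chain.append((url, responses[url][0]))
    return chain

# Hypothetical URLs echoing the example above, simplified to a linear chain.
responses = {
    "http://domain.com/example/":       (302, "http://domain.com/PnVKV/example/"),
    "http://domain.com/PnVKV/example/": (301, "https://www.domain.com/example/"),
    "https://www.domain.com/example/":  (200, None),
}
for hop, status in trace_redirects("http://domain.com/example/", responses):
    print(status, hop)
```

Any chain longer than hop -> 200 dilutes the redirect, and a 302 as the first hop is the part most worth escalating.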
We 410'ed URLs to decrease URLs submitted and increase crawl rate, but dynamically generated sub URLs from pagination are showing as 404s. Should we 410 these sub URLs?
Hi everyone! We recently 410'ed some URLs to decrease the URLs submitted and hopefully increase our crawl rate. We had some dynamically generated sub-URLs for pagination that are shown as 404s in Google. These sub-URLs were canonical to the main URLs and not included in our sitemap. Ex: We assumed that if we 410'ed example.com/url, then the dynamically generated example.com/url/page1 would also 410, but instead it 404'ed. Does it make sense to go through and 410 these dynamically generated sub-URLs, or is it not worth it? Thanks in advance for your help! Jeff
Intermediate & Advanced SEO | jeffchen
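One way to make the paginated sub-URLs return 410 along with their parents is a prefix match on the removed path, so example.com/url/page1 is treated the same as example.com/url. A minimal sketch of that routing logic, with /url as a hypothetical removed path (in practice this check would live in your server or framework's request handler):

```python
GONE_PREFIXES = ["/url"]  # hypothetical paths of the removed sections

def status_for(path, gone_prefixes=GONE_PREFIXES):
    """Return 410 for a removed URL and any paginated sub-URL beneath it."""
    for prefix in gone_prefixes:
        if path == prefix or path.startswith(prefix + "/"):
            return 410
    return 200

print(status_for("/url"))        # 410
print(status_for("/url/page1"))  # 410 -- sub-URL caught by the prefix match
print(status_for("/other"))      # 200
```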
How to resolve duplicate content issues when using Geo-targeted Subfolders to separate US and CAN
A client of mine is about to launch into the USA market (currently only operating in Canada) and they are trying to find the best way to geo-target. We recommended they go with the geo-targeted subfolder approach (___.com and ___.com/ca). I'm looking for any ways to assist in not getting these pages flagged for duplicate content. Your help is greatly appreciated. Thanks!
Intermediate & Advanced SEO | jyoung222
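For the subfolder approach, the usual safeguard against duplicate-content flags is hreflang annotations pointing each variant at its siblings. A small sketch that generates the <link> tags for a hypothetical /products/ page (the domain, path, and locale mapping are placeholders, not the client's real setup):

```python
def hreflang_tags(base, variants):
    """Build hreflang <link> tags for each locale variant of a page path.

    `variants` maps an hreflang code to the subfolder serving that market
    ("" for the default .com pages). Domain and path are placeholders.
    """
    return [
        f'<link rel="alternate" hreflang="{lang}" href="{base}{folder}/products/" />'
        for lang, folder in variants.items()
    ]

# en-us lives at the root, en-ca under /ca, and x-default falls back to the root.
variants = {"en-us": "", "en-ca": "/ca", "x-default": ""}
for tag in hreflang_tags("https://www.example.com", variants):
    print(tag)
```

Each page's head would carry the full set of tags, so Google treats the US and CA versions as locale alternates rather than duplicates.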
Rankings disappeared on main 2 keywords - are links the issue?
Hi, I asked a question around 6 months ago about our rankings steadily declining since April of 2013. I did originally reply to that topic a few days ago, but as it's so old I don't think it's been noticed. I'm posting again here; if that's an issue I'm happy to delete. Here it is for reference: http://moz.com/community/q/site-rankings-steadily-decreasing-do-i-need-to-remove-links Since the original post, I have done nothing linkbuilding-wise except posting blog posts and sharing them on Facebook, G+ and Twitter. There are some links in there which don't look great (i.e. spammy SEO directories, which I'm sending removal requests to), although quite a lot of others are relevant. Here's my link profile: http://www.opensiteexplorer.org/links?site=www.thomassmithfasteners.com I've tried to make the site more accessible - we now have a simple, responsive design and I've tried to make the content clear and concise. In short, written for humans rather than search engines. As of the end of November, 'nuts and bolts' has now disappeared completely, and 'bolts and nuts' is page 8. There are many pages much higher which are not as relevant and have no links. We still rank highly for more specialised terms - i.e. 'bsw bolts' and 'imperial bolts' are still page 1, but not as high as before. We get an 'A' grade on the on-page grader for 'nuts and bolts', and most above us get F. I was cautious about removing links as our profile doesn't seem too bad, but it does seem as if it's that. There are a fair few questionable directories in there, no doubt about that, but our overall practice in recent years has been natural building and link earning. So - I've created a spreadsheet and identified the bad links - i.e. directories with any SEO connotations. I am about to submit removal requests; I thought two polite requests a couple of weeks apart prior to disavowing with Google. But am I safe to disavow straight away?
I say this as I don't think I'll get too many responses from those directories. I am also gradually beefing up the content on the shop pages in case of any 'thin content' issues after advice on the previous post. I noticed 100s of broken links in webmaster tools last week due to 2 broken links on our blog that repeated on every page and have fixed those. I have also been fixing errors W3C compliance-wise. Am I right to do all this? Can anyone offer any suggestions? I'm still not 100% sure if this is Panda, Penguin or something else. My guess is Penguin, but the decline started in March 2013, which correlates with Panda. Best Regards and thanks for any help, Stephen
Intermediate & Advanced SEO | stephenshone
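If you do go ahead with the disavow after the removal requests, the file Google accepts is plain text, one entry per line, with # for comments and a domain: prefix to cover a whole site. A hypothetical sketch (the domains here are placeholders, not the directories from the post):

```text
# Hypothetical disavow file -- domains are placeholders, not real sites.
# Lines beginning with # are comments and are ignored.

# Disavow every link from an entire directory site:
domain:spammy-seo-directory.example

# Disavow a single page:
http://low-quality-directory.example/listing/12345
```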
Is Sitemap Issue Causing Duplicate Content & Unindexed Pages on Google?
On July 10th my site was migrated from Drupal to Google. The site contains approximately 400 pages. 301 permanent redirects were used. The site contains maybe 50 pages of new content. Many of the new pages have not been indexed, and many pages show as duplicate content. Is it possible that there is a sitemap issue that is causing this problem? My developer believes the map is formatted correctly, but I am not convinced. The sitemap address is http://www.nyc-officespace-leader.com/page-sitemap.xml. I am completely non-technical, so if anyone could take a brief look I would appreciate it immensely. Thanks,
Intermediate & Advanced SEO | Kingalan1
Alan
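One quick sanity check on a sitemap like page-sitemap.xml is parsing it and confirming every <loc> is an absolute URL on the migrated domain. A minimal sketch with a hypothetical two-URL sitemap in the standard sitemaps.org format (the URLs are placeholders, not the poster's site):

```python
import xml.etree.ElementTree as ET

# A minimal sitemap in the standard sitemaps.org format (URLs are placeholders).
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <url><loc>http://www.example.com/office-space/</loc></url>
</urlset>"""

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in ET.fromstring(sitemap_xml).findall("sm:url/sm:loc", ns)]

# Every <loc> should be an absolute URL on the migrated domain.
assert all(u.startswith("http://www.example.com/") for u in urls)
print(urls)
```

If any entry still points at an old Drupal path or the wrong host, that would explain pages failing to index and duplicates appearing.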
After receiving a "Googlebot can't access your site" message, would this stop your site from being crawled?
Hi Everyone,
Intermediate & Advanced SEO | AMA-DataSet
A few weeks ago I received a "Googlebot can't access your site..... connection failure rate is 7.8%" message from Webmaster Tools. I have since fixed the majority of these issues, but I've noticed that all pages except the main home page now have a PageRank of N/A, while the home page still has a PageRank of 5. Have these connectivity issues reduced the PageRanks to N/A, or is it something else I'm missing? Thanks in advance.
SEOMOZ crawler is still crawling a subdomain despite disallow
This is for our client with a subdomain. We only want to analyze their main website, as this is the one we want to SEO. The subdomain is not optimized, so we know it's bound to have lots of errors. We added the disallow code when we started and it was working fine. We only saw the errors for the main domain and we were able to fix them. However, just a month ago, the errors and warnings spiked up and the errors we saw were for the subdomain. As far as our web guys are concerned, the disallow code is still there and was not touched:
User-agent: rogerbot
Disallow: /
We would like to know if there's anything we might have unintentionally changed or something we need to do so that the SEOMOZ crawler will stop going through the subdomain. Any help is greatly appreciated!
Intermediate & Advanced SEO | TheNorthernOffice79
Bing flags multiple H1's as an issue of high importance--any case studies?
Going through Bing's SEO Analyzer and found that Bing thinks having multiple H1's on a page is an issue. It's going to be quite a bit of work to remove the H1 tags from various pages. Do you think this is a major issue or not? Does anyone know of any case studies / interviews to show that fixing this will lead to improvement?
Intermediate & Advanced SEO | nicole.healthline
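A quick way to audit pages for this before committing to the cleanup is counting <h1> tags with Python's standard-library HTML parser; the page markup below is a hypothetical example of the pattern Bing flags:

```python
from html.parser import HTMLParser

class H1Counter(HTMLParser):
    """Count <h1> start tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.count += 1

# Hypothetical page showing the multiple-H1 pattern Bing flags.
page = "<html><body><h1>Main</h1><section><h1>Also a heading</h1></section></body></html>"
counter = H1Counter()
counter.feed(page)
print(counter.count)  # 2
```

Running this across a crawl of your templates would show how many pages are actually affected, which helps weigh the fix against the effort.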