Crawl diagnostic issue?
-
I'm sorry if my English isn't very good, but here is my problem at the moment:
On two of my campaigns I get a weird error in Moz Analytics:
605 Page Banned by robots.txt, X-Robots-Tag HTTP Header, or Meta Robots Tag
Moz Analytics points to a URL that starts with http://None/www.????.com. We don't understand how Moz crawled this non-existent page that starts with /None/, and how can we solve this error?
I hope that someone can help me.
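For anyone hitting the same 605 error: it fires when any one of three mechanisms blocks the crawler. Below is a minimal sketch of how you might check all three yourself; the function names and the rogerbot user-agent default are my own assumptions for illustration, not Moz's actual code.

```python
import re
from urllib import robotparser

def _has_noindex(directives):
    # "none" is shorthand for "noindex, nofollow"
    tokens = {t.strip().lower() for t in directives.split(",")}
    return bool(tokens & {"noindex", "none"})

def blocking_reasons(url, robots_txt, headers, html, user_agent="rogerbot"):
    """Return which of the three '605' mechanisms block this page:
    robots.txt, the X-Robots-Tag HTTP header, or a meta robots tag."""
    reasons = []

    # 1. robots.txt disallow rules
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    if not rp.can_fetch(user_agent, url):
        reasons.append("robots.txt")

    # 2. X-Robots-Tag response header
    if _has_noindex(headers.get("X-Robots-Tag", "")):
        reasons.append("X-Robots-Tag header")

    # 3. <meta name="robots"> in the page source
    m = re.search(r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)',
                  html, re.I)
    if m and _has_noindex(m.group(1)):
        reasons.append("meta robots tag")

    return reasons
```

Running this against the blocked URL (with the robots.txt body, response headers, and HTML you fetched) narrows down which of the three is responsible.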
-
Hi MOZ,
I'm sorry that I have not responded sooner. The problem has been solved. Thanks!
Also thanks to Pixel for the response!
Greetz,
Sam
-
Hi Nettt!
I apologize for any confusion and can confirm there is no issue on your side. One of our crawlers failed, causing some campaigns crawled on the Aug 29th attempt to follow the strange /None/ URL you are seeing in your diagnostics. I've submitted a re-crawl for all of your affected campaigns, so you should see updated data by this Friday.
Hope this helps!
-
"I have checked the URL, and it is not our own website that has the error."
Is this the problem?
Could you take a screen grab of the problem? It might help.
-
Thanks for the response, Pixelbypixel!
I have checked the URL, and it is not our own website that has the error.
We have checked the robots.txt and it shouldn't cause any problems. We haven't changed it recently.
I think that Moz is causing it, but I am not sure.
-
Is the URL correct in Moz Pro? It also seems like your robots.txt is blocking Moz, which you may want to look into.
Related Questions
-
Weird Indexing Issues with the Pages and Rankings
When I found that my page was missing from the search results, I requested that Google index it via Search Console. Just a few minutes later, that page rose to a top-3 ranking on the search results page (for the same keyword and browser search). This happens to most of the pages on my website. About a week later the rankings sank again, and I had to repeat the process to get my pages back to the top. Is there any way to explain this phenomenon, and how can I fix this issue? Thank you in advance.
Intermediate & Advanced SEO | mrmrsteven0
-
Hreflang implementation issue
We currently handle search for a global brand, www.example.com, which has a presence in many countries worldwide. To help Google understand that an alternate version of the website is available in another language, we have used hreflang tags. There is also a mother website (www.example.com/global) which is given the "x-default" attribution in the hreflang tag. For Malaysia as a geolocation, the mother website is ranking instead of the local website (www.example.com/my) for the majority of the products. The code used for the hreflang tag implementation on a product page being: These hreflang tags are also present in the XML sitemap of the website, for example: <loc>http://www.example.com/my/product_name</loc> <lastmod>2017-06-20</lastmod> Is this implementation of hreflang tags fine? The implementation is the same across all geolocations, but the mother website is outranking the local site only in the Malaysian market. If the implementation is correct, what could be other reasons for this ranking issue? All other SEO elements have been thoroughly verified and seem fine.
Intermediate & Advanced SEO | Starcom_Search0
-
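For reference, here is a minimal sketch of what the hreflang annotations inside each sitemap `<url>` entry generally look like. The helper function and the locale map are hypothetical, not the poster's actual code; the key rule is that every version of the page must list all alternates, including itself and the x-default.

```python
def hreflang_links(alternates):
    """Build the xhtml:link elements for one sitemap <url> entry.
    `alternates` maps hreflang codes to absolute URLs and must include
    the page's own locale plus the x-default fallback."""
    return "\n".join(
        f'  <xhtml:link rel="alternate" hreflang="{code}" href="{url}"/>'
        for code, url in alternates.items()
    )

# hypothetical product page in a setup like the one described above
alternates = {
    "x-default": "http://www.example.com/global/product_name",
    "en-my": "http://www.example.com/my/product_name",
}
print(hreflang_links(alternates))
```

If the Malaysian page's entry omits itself or the annotations are not reciprocal across versions, Google may fall back to the x-default, which would match the symptom described.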
Product search URLs with parameters and pagination issues - how should I deal with them?
Hello Mozzers - I am looking at a site that deals with URLs that generate parameters (sadly unavoidable in the case of this website, with the resources they have available - none for redevelopment). They handle the URLs that include parameters with robots.txt - e.g. Disallow: /red-wines/? Beyond that, they use rel=canonical on every paginated parameter page [such as https://wine****.com/red-wines/?region=rhone&minprice=10&pIndex=2] in search results. I have never used this method on paginated "product results" pages. Surely this is an incorrect use of canonical, because these parameter pages are not simply duplicates of the main /red-wines/ page? Perhaps they are using it in case the robots.txt directive isn't followed, as sometimes it isn't - to guard against the indexing of some of the parameter pages? I note that Rand Fishkin has commented on a rel=canonical directive on paginated results pointing back to the top page in an attempt to flow link juice to that URL: "you'll either misdirect the engines into thinking you have only a single page of results or convince them that your directives aren't worth following (as they find clearly unique content on those pages)." Yet I see this time and again on ecommerce sites, on paginated results - any idea why? Now the way I'd deal with this is: meta robots tags on the parameter pages I don't want indexed (noindex - this is not duplicate content, so I would nofollow, but perhaps I should follow?), and rel="next" and rel="prev" links on paginated pages - that should be enough. Look forward to feedback, and thanks in advance, Luke
Intermediate & Advanced SEO | McTaggart
-
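The "noindex the parameter pages, but keep follow so link equity flows" approach Luke describes can be sketched as a simple rule. This is a hypothetical helper, not the site's code, and whether noindex/follow beats canonical here is exactly the open question in the post:

```python
from urllib.parse import urlparse, parse_qs

def meta_robots_for(url):
    """Sketch: filtered/paginated parameter URLs get noindex (with follow,
    so crawlers still pass equity through their links); the clean category
    URL stays fully indexable."""
    has_params = bool(parse_qs(urlparse(url).query))
    return "noindex, follow" if has_params else "index, follow"
```

Unlike the Disallow rule in robots.txt, a meta robots tag is only seen if the page is crawled, so the two directives shouldn't be combined on the same URLs: blocking crawling hides the noindex.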
Wordpress sidebar dropdown url ?cat...number issue
Hi, does anyone know how to fix the WordPress sidebar dropdown URL ?cat=<number> issue? I found the links below, but I'm still not sure of the best way to fix it: https://wordpress.org/support/topic/how-to-display-category-name-in-url-not-cat-number https://wordpress.org/support/topic/category-drop-down-menu-shows-wrong-permalink-structure-1 Any ideas?
Intermediate & Advanced SEO | Taiger0
-
Rankings disappeared on main 2 keywords - are links the issue?
Hi, I asked a question around 6 months ago about our rankings steadily declining since April 2013. I did originally reply to that topic a few days ago, but as it's so old I don't think it's been noticed. I'm posting again here; if that's an issue I'm happy to delete it. Here it is for reference: http://moz.com/community/q/site-rankings-steadily-decreasing-do-i-need-to-remove-links Since the original post, I have done nothing linkbuilding-wise except posting blog posts and sharing them on Facebook, G+ and Twitter. There are some links in there which don't look great (i.e. spammy SEO directories, which I'm sending removal requests to), although quite a lot of others are relevant. Here's my link profile: http://www.opensiteexplorer.org/links?site=www.thomassmithfasteners.com I've tried to make the site more accessible - we now have a simple, responsive design and I've tried to make the content clear and concise. In short, written for humans rather than search engines. As of the end of November, 'nuts and bolts' has now disappeared completely, and 'bolts and nuts' is page 8. There are many pages ranking much higher which are not as relevant and have no links. We still rank highly for more specialised terms - i.e. 'bsw bolts' and 'imperial bolts' are still page 1, but not as high as before. We get an 'A' grade on the on-page grader for 'nuts and bolts', and most above us get an F. I was cautious about removing links, as our profile doesn't seem too bad, but it does seem as if that's the cause. There are a fair few questionable directories in there, no doubt about that, but our overall practice in recent years has been natural building and link earning. So - I've created a spreadsheet and identified the bad links - i.e. directories with any SEO connotations. I am about to submit removal requests; I thought two polite requests a couple of weeks apart prior to disavowing with Google. But am I safe to disavow straight away?
I say this as I don't think I'll get too many responses from those directories. I am also gradually beefing up the content on the shop pages in case of any 'thin content' issues after advice on the previous post. I noticed 100s of broken links in webmaster tools last week due to 2 broken links on our blog that repeated on every page and have fixed those. I have also been fixing errors W3C compliance-wise. Am I right to do all this? Can anyone offer any suggestions? I'm still not 100% sure if this is Panda, Penguin or something else. My guess is Penguin, but the decline started in March 2013, which correlates with Panda. Best Regards and thanks for any help, Stephen
Intermediate & Advanced SEO | stephenshone0
-
Issues with Sub domains for dealers
I'm starting a new SEO project and am feeling a little overwhelmed by the scale of it. I am not sure where to start and hope that someone has some ideas. Thousands of dealer websites reside as subdomains on gravelymower.com (e.g. http://quality-mowers.gravelymower.com/). The particular subdomain mentioned above is not showing up at all for any searches and is not cached by Google: http://webcache.googleusercontent.com/search?q=cache:http://quality-mowers.gravelymower.com/ I realize that pretty much zero SEO best practices are followed on the page, and the location is not on the page, but why is this subdomain not even being indexed by Google? Any help is appreciated. Thanks!
Intermediate & Advanced SEO | BridgelineDigital880
-
Googlebot crawling partial URLs
Hi guys, I checked my email this morning and I've got a number of 404 errors over the weekend where Google has tried to crawl some of my existing pages but not found the full URL. Instead of hitting 'domain.com/folder/complete-pagename.php', it's hit 'domain.com/folder/comp'. This is definitely Googlebot/2.1 (http://www.google.com/bot.html, 66.249.72.53), but I can't find where it would have found only the partial URL. It certainly wasn't on the domain it's crawling, and I can't find any links from external sites pointing to us with the incorrect URL. Googlebot is doing the same thing across a single domain but in different sub-folders. Having checked Webmaster Tools, there aren't any hard 404s, and the soft ones aren't related and haven't occurred since August. I'm really confused as to how this is happening. Thanks!
Intermediate & Advanced SEO | panini0
-
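A quick way to investigate requests like these is to pull Googlebot's 404s straight out of the server access logs. The sketch below assumes a standard combined log format; the regex and log layout are assumptions for illustration, not facts from the thread, and user-agent matching alone can be spoofed (production checks should verify the IP via reverse DNS as well):

```python
import re

# Matches combined-log lines such as:
# 66.249.72.53 - - [date] "GET /folder/comp HTTP/1.1" 404 512 "-" "...Googlebot/2.1..."
LOG_RE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) .*Googlebot')

def googlebot_404_paths(log_lines):
    """Return the paths that a Googlebot user-agent requested and that
    returned a 404 status."""
    hits = []
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and m.group("status") == "404":
            hits.append(m.group("path"))
    return hits
```

Comparing the truncated paths against their full counterparts (and the referrers logged alongside them, if any) often reveals where the broken URLs were discovered.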
In-House SEO - Doubt about one SEO issue - Plz guys help over here =)
Hello, We want to promote some of our software products. I'll give you one example below: http://www.mediavideoconverter.de/pdf-to-epub-converter.html We also have this domain: http://pdftoepub.de/ How can we deal with the duplicate content, and how can we improve the first domain's product page? If I use a canonical tag, don't index the second domain, and make a link to the first domain, will it help anyway, or make no difference? Keywords: pdf to epub, pdf to epub converter. What do you think about this technique? Good / Bad? Is the second domain giving any value to the first domain's page? Thanks in advance.
Intermediate & Advanced SEO | augustos0
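For what it's worth, the cross-domain canonical the poster asks about is just a link element on every page of the duplicate domain pointing at the preferred page on the main domain. A minimal sketch (hypothetical helper, not either site's actual code):

```python
def cross_domain_canonical(preferred_url):
    """The tag each page on the duplicate domain (e.g. pdftoepub.de) would
    carry in its <head>, pointing at the matching page on the main domain."""
    return f'<link rel="canonical" href="{preferred_url}"/>'
```

Note that combining this with a noindex on the same pages is contradictory: the canonical asks search engines to consolidate signals onto the target, while noindex asks them to drop the page entirely, so it's generally one or the other.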