How to get rid of the message "Search Engine blocked by robots.txt"
-
During the Crawl Diagnostics of my website, I got the message "Search Engine blocked by robots.txt" under Most Common Errors & Warnings. Please let me know how I can allow the SEOmoz PRO Crawler to crawl my website completely. Awaiting your reply at the earliest.
Regards,
Prashakth Kamath
-
Thanks Simon for the info. Will check and revert if there are any issues.
Regards,
Prashakth Kamath
-
Thanks Ryan for the info. Will check and revert if there are any issues.
Regards,
Prashakth Kamath
-
Hi Sagar
That was a good reply from Ryan.
Check out http://www.seomoz.org/dp/rogerbot
rogerbot is the name of the SEOmoz crawler bot; the page above has all the info you require.
Regards
Simon
-
The SEOmoz user agent is named rogerbot. You can read more about the SEOmoz crawl process here: http://seomoz.zendesk.com/entries/20034082-lesson-5-crawl-diagnostics
User-agent: rogerbot
Allow: /
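If you also want to keep every other crawler out, as requested in the question below, a minimal robots.txt along these lines should work. This is just a sketch; note that robots.txt is a voluntary convention, so only well-behaved bots will respect the block:

User-agent: rogerbot
Allow: /

User-agent: *
Disallow: /

Compliant crawlers obey the most specific User-agent group that matches them, so rogerbot follows its named group while all other bots fall through to the catch-all disallow.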
-
Thanks Ryan for your immediate reply.
Can you please provide the name and code of the SEOmoz crawler that I need to enter in my robots.txt file so that SEOmoz crawls all the pages of my website? Apart from the SEOmoz crawler, I don't want any other crawler to crawl my website. Please help. Awaiting your reply.
Regards,
Prashakth Kamath
-
That error is pretty straightforward: it indicates you have a robots.txt file which is blocking the crawler from accessing your site. You can read the robots.txt file by appending /robots.txt to your site URL, e.g. www.mysite.com/robots.txt.
The file lives in the root directory of your site's web server. Remove or alter it to allow search engines to crawl your site. More info can be found at http://www.robotstxt.org/
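By way of illustration (your actual file may differ), a robots.txt that triggers this warning typically contains a blanket disallow; changing the Disallow value to empty opens the site to all compliant crawlers:

# Blocks every crawler from the entire site:
User-agent: *
Disallow: /

# Allows every crawler (an empty Disallow blocks nothing):
User-agent: *
Disallow: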
Related Questions
-
Keyword research tools that provide specific suggestions re: Voice Search?
Hi, I'm wondering which are the best keyword research tools that provide specific volumes and suggestions for voice search, including question-type searches? Any suggestions would be brilliant - thanks in advance, Luke
Moz Pro | McTaggart
-
Does Moz have any tools to see the amount of traffic certain keywords bring us in search? Does anyone know any tools that give the actual traffic numbers?
We're looking for numerical data on the amount of traffic that keywords receive, regardless of their rank in Moz. Thanks!
Moz Pro | Scratch-Kony
-
Block Moz (or any other robot) from crawling pages with specific URLs
Hello! Moz reports that my site has around 380 duplicate page content issues. Most of them come from dynamically generated URLs that have specific parameters. I have sorted this out for Google in Webmaster Tools (the new Google Search Console) by blocking the pages with these parameters. However, Moz is still reporting the same number of duplicate content pages and, to stop it, I know I must use robots.txt. The trick is that I don't want to block every page, just the pages with specific parameters. Among these 380 pages there are other pages with no parameters (or different parameters) that I need to take care of; basically, I need to clean this list to be able to use the feature properly in the future. I have read through the Moz forums and found a few related topics, but there is no clear answer on how to block only pages with specific URLs. Therefore, I have done my research and come up with these lines for robots.txt:

User-agent: dotbot
Disallow: /*numberOfStars=0

User-agent: rogerbot
Disallow: /*numberOfStars=0

My questions: 1. Are the above lines correct, and would they block Moz (dotbot and rogerbot) from crawling only pages that have the numberOfStars=0 parameter in their URLs, leaving other pages intact? 2. Do I need an empty line between the two groups (between "Disallow: /*numberOfStars=0" and "User-agent: rogerbot"), or does it even matter? I think this would help many people, as there is no clear answer on how to block crawling of only pages with specific URLs. Moreover, this should be valid for any robot out there. Thank you for your help!
Moz Pro | Blacktie
-
Metric "Total Links" can somebody explain this metric to me?
Dear colleagues, who can explain the following to me? Under subdomain metrics, "Total Links" is huge compared to the sum of internal and external links. I do not understand this metric; can somebody help me explain it? I have to present these metrics to my customer and do not want to have "don't know" as an answer 😉 Thanks, Alain Nijholt, BMC Internet Marketing
Moz Pro | bmcinternetmarketing
-
What should the Cols value be if I want to get backlinks?
Hi, I am forming the URL below to get backlinks:

http://lsapi.seomoz.com/linkscape/url-metrics/".$trimurl."?Cols=2048
&AccessID=".$accessID."
&Expires=".$expires."
&Signature=".$urlSafeSignature;

For example, if I set $trimurl = "www.tatvic.com/", I get [uid] => 633. Is this the right way to get the number of backlinks? If not, what should the Cols value be? Also, how can I ensure that the number of links I am getting is correct? Is there any way to compare this number with Google search results? This is essential to check, as I got different numbers of backlinks from different APIs. Thank you.
Moz Pro | Ravi_Pathak
-
Does Rogerbot respect the robots.txt file for wildcards?
Hi All, Our robots.txt file has wildcards in it, which Googlebot recognizes. Can anyone tell me whether or not Rogerbot recognizes wildcards in the robots.txt file? We've done a Rogerbot site crawl since updating the robots.txt file and the pages that are set to disallow using the wildcards are still showing. BTW, Googlebot is not crawling these pages according to Webmaster Tools. Thanks in advance, Robert
Moz Pro | AC_Pro
-
"Issue: Duplicate Page Content " in Crawl Diagnostics - but sample pages are not related to page indicated with duplicate content
In the crawl diagnostics for my campaign, the duplicate content warnings have been increasing, but when I look at the sample pages that SEOmoz says have duplicate content, they are completely different pages from the page identified. They have different titles, meta descriptions, and HTML content, and are often different types of pages, e.g. a product page flagged as a duplicate of a category page. Anyone know what could be causing this?
Moz Pro | EBCeller
-
90% of the sites we design are in WordPress and the report brings up "duplicate content" errors. I presume this is down to a canonical error?
We are looking at getting the Agency version of SEOmoz and are based in the UK. Could you please tell me the best way to correct this issue, as it appears to be a problem with all our clients' websites? An example would be www.fsgenergy.co.uk. Would you also be able to suggest the best SEO plugin to use with SEOmoz? Many thanks, Paul
Moz Pro | KloodLtd