Blocking all robots except rogerbot
-
I'm working on a site that's under development and want to run the SEOmoz crawl test before we launch it publicly. Unfortunately, rogerbot is reluctant to crawl the site. I've set my robots.txt to disallow all bots besides rogerbot.
Currently looks like this:
User-agent: *
Disallow: /

User-agent: rogerbot
Disallow:
All pages within the site are meta tagged index,follow.
Crawl report says:
Search Engine blocked by robots.txt: Yes
Am I missing something here?
-
...actually I take that back. Still reporting as blocked by robots.txt.
Going to email the team.
-
Thanks, it appears to be crawling without issue now.
-
And if that still doesn't work, email help@seomoz.org and they'll help you figure out the right way to let Roger in while excluding everyone else.
-
You've got it upside down.
Roger sees the first * and then goes "okay :(" and goes away.
Simply change it to:
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
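As a quick sanity check, the corrected file can be run through Python's standard-library robots.txt parser. The example.com URL is just a placeholder, and rogerbot's own parser may of course behave differently:

```python
from urllib import robotparser

# The corrected robots.txt from the answer above: rogerbot is let in,
# every other user agent is blocked.
ROBOTS_TXT = """\
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())
parser.modified()  # parse() alone leaves the parser marked "unread"

# An empty Disallow line means "allow everything" for that user agent.
print(parser.can_fetch("rogerbot", "http://example.com/any-page"))   # True
print(parser.can_fetch("Googlebot", "http://example.com/any-page"))  # False
```

Note that urllib.robotparser matches a named group before falling back to the * group, so rogerbot gets the empty (allow-all) rule while everyone else gets Disallow: /.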
Related Questions
-
Unsolved: Rogerbot blocked by Cloudflare and not displaying the full user agent string.
Hi, We're trying to get Moz to crawl our site, but when we use Create Your Campaign we get the error:

"Ooops. Our crawlers are unable to access that URL - please check to make sure it is correct. If the issue persists, check out this article for further help."

Our robots.txt is fine, and we can actually see that Cloudflare is blocking the crawl with Bot Fight Mode. We've added rules to allow rogerbot, but these seem to be getting ignored. If we use a robots.txt testing tool (https://technicalseo.com/tools/robots-txt/) with rogerbot as the user agent, it gets through fine and we can see our rule has allowed it. Viewing the Cloudflare activity log (attached), it seems Create Your Campaign tries to crawl the site with the user agent simply set to "rogerbot 1.2", whereas the robots.txt testing tool uses the full user agent string "rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, rogerbot-crawler+shiny@moz.com)", albeit version 1.0. So it seems Cloudflare doesn't like the simple user agent. Is it correct that when Moz tries to crawl the site it now uses the simple string of just "rogerbot 1.2"? Thanks
Ben

[Attachment: Cloudflare activity log, showing differences in user agent strings]
Restrict rogerbot for few days
Hi Team, I have a subdomain built on Zendesk's CRM system. I want to block the Moz crawler (rogerbot) from crawling this entire subdomain for a few days, but I can't edit the subdomain's robots.txt file, because it is shared and Zendesk doesn't allow editing it. Could you please let me know an alternative way to stop rogerbot from crawling this subdomain? I'm eagerly awaiting your quick response. Thanks
Will Moz crawl pages blocked by robots.txt and nofollow links?
I have over 2,000 temporary redirects in my campaign report. The redirects are mostly events like being redirected to a login page before showing the actual data. I'm thinking of adding nofollow on the links so Moz won't crawl the redirection, to reduce the notifications. Will this solve my problem?
Moz campaign works around my robots.txt settings
My robots.txt file looks like this:

User-agent: *
Disallow: /*?
Disallow: /search

So it should block (deindex) all dynamic URLs. If I check this URL in Google: site:http://www.webdesign.org/search/page-1.html?author=47 Google tells me: "A description for this result is not available because of this site's robots.txt - learn more." So far so good. Now, I ran a Moz SEO campaign and got a bunch of duplicate page content errors. One of the links is this one: http://www.webdesign.org/search/page-1.html?author=47 (the same one I tested in Google, which told me the page is blocked by robots.txt, which is what I want). So it makes me think that Moz campaigns check files regardless of what robots.txt says? It's my understanding that User-agent: * should forbid rogerbot from crawling as well. Am I missing something?
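Worth noting: the /*? rule relies on the wildcard extension that Google supports; the original robots.txt standard only does path-prefix matching, and Python's standard-library parser, for one, ignores wildcards. A rough sketch of wildcard-style matching, with a hypothetical rule_matches helper (not Moz's or Google's actual code):

```python
import re

def rule_matches(pattern: str, path: str) -> bool:
    # Translate Google-style robots.txt wildcards into a regex:
    # '*' matches any run of characters, '$' anchors the end of the URL,
    # everything else is matched literally from the start of the path.
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.match(regex, path) is not None

print(rule_matches("/*?", "/search/page-1.html?author=47"))  # True: blocked
print(rule_matches("/search", "/search/page-1.html"))        # True: blocked
print(rule_matches("/*?", "/about.html"))                    # False: no query string
```

Under this interpretation both rules in the question do cover the reported URL, which is why Google shows it as blocked.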
It's been over a month and rogerbot hasn't crawled the entire website yet. Any ideas?
Rogerbot stopped crawling the website at 308 pages last week and has not crawled the full website of over 1,000 pages. Any ideas on what I can do to get this fixed and crawling again?
Does SEOmoz recognize duplicated URLs blocked in robots.txt?
Hi there: Just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl Diagnostics reports" that should not be there. They are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking content in the robots.txt file:

User-agent: *
Disallow: /components/

The question is: Will this kind of duplicated URL error be removed from the error list automatically in the future? Should I keep track of which errors should not really be in the error list? What is the best way to handle these errors? Thanks and best regards, Franky
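One thing that stands out: robots.txt rules are plain path-prefix matches, and the example URL starts with /component/ (singular) while the rule blocks /components/ (plural). That can be sanity-checked with Python's standard-library parser (domain and user agent here are just the ones from the question):

```python
from urllib import robotparser

# The robots.txt from the question above.
parser = robotparser.RobotFileParser()
parser.parse("""\
User-agent: *
Disallow: /components/
""".splitlines())
parser.modified()  # parse() alone leaves the parser marked "unread"

url = "http://www.domain.com/component/users/?view=registration"
# True means "allowed to crawl": /component/... does not start
# with the blocked prefix /components/.
print(parser.can_fetch("rogerbot", url))  # True
```

So a standards-compliant crawler would not consider that URL blocked, which may be why it shows up in the crawl report.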
Does the SEOMoz weekly crawl that highlights no meta description tag, take into account if there is a meta robots noindex,follow tag on the pages it indicates the missing meta descriptions?
The weekly crawl report is telling me that there are pages missing meta description tags, yet I've implemented meta robots tags to noindex,follow those pages, which is visible in their page source. As far as Google is concerned, surely this won't be a problem, since Google is being instructed NOT to consider these specific pages for indexing. I am assuming that the weekly SEOmoz crawl simply throws the missing meta description findings into its report without observing that the URLs in question contain the meta robots noindex,follow tag? I'd appreciate it if you could clarify whether this is the case. It would help me understand that (at least in terms of my efforts towards Google) your crawl doesn't observe the meta robots tag instruction, hence the report flagging the discrepancy.
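For reference, the noindex,follow instruction described above is the standard meta robots tag placed in the page head:

```html
<!-- Ask search engines not to index this page, but still follow its links -->
<meta name="robots" content="noindex,follow">
```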
Link Blocks
Sorry, perhaps a noob question. In relation to Site Explorer (I've also searched and been unable to find any information), could anyone advise what "Linking C Blocks" are? Found under the "Compare Link Metrics" tab. Thanks in advance. Lee