Rogerbot directives in robots.txt
-
I feel like I spend a lot of time marking false positives in my reports to be ignored.
Can I prevent Rogerbot from crawling pages I don't care about with robots.txt directives? For example, I have some page types with meta noindex, and Rogerbot still reports them to me. In theory, I could block Rogerbot from these pages with a robots.txt directive and not have to deal with the false positives.
-
Yes, you can definitely use the robots.txt file to prevent Rogerbot from crawling pages that you don’t want to include in your reports. This approach can help you manage and minimize false positives effectively.
To block specific pages or directories from being crawled, you would add directives to your robots.txt file. For example, if you have certain page types that you’ve already set with meta noindex, you can specify rules like this:
```
User-agent: rogerbot
Disallow: /path-to-unwanted-page/
Disallow: /another-unwanted-directory/
```
This tells Rogerbot not to crawl the specified paths, which should reduce the number of irrelevant entries in your reports.
However, keep in mind that while robots.txt directives can prevent crawling, they do not guarantee that these pages won't show up in search results if they are linked from other sites or indexed by different bots.
Additionally, using meta noindex tags is still a good practice for pages that may occasionally be crawled but shouldn’t appear in search results. Combining both methods—robots.txt for crawling and noindex for indexing—provides a robust solution to manage your web presence more effectively.
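For reference, the noindex half of that combination is just a meta tag in the page's head (or an equivalent X-Robots-Tag HTTP response header). A minimal sketch of a hypothetical page:

```html
<!-- Hypothetical page: crawlable, but excluded from search results -->
<!DOCTYPE html>
<html>
  <head>
    <title>Internal search results</title>
    <!-- Compliant bots that fetch this page will not index it -->
    <meta name="robots" content="noindex">
  </head>
  <body>
    ...
  </body>
</html>
```

One caveat worth keeping in mind: a crawler that robots.txt blocks never fetches the page, so it never sees the noindex tag. That trade-off is fine here, since the goal is quieter crawl reports rather than de-indexing.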
-
Never mind, I found this. https://moz.com/help/moz-procedures/crawlers/rogerbot
-
@awilliams_kingston
Yes, you can use robots.txt directives to prevent Rogerbot from crawling certain pages or sections of your site, which can help reduce the number of false positives in your reports. By doing so, you can focus Rogerbot's attention on the parts of your site that matter more to you and avoid reporting issues on pages you don't care about. Here's a basic outline of how you can use robots.txt to block Rogerbot:
1. Locate or Create Your robots.txt File: This file should be placed in the root directory of your website (e.g., https://www.yourwebsite.com/robots.txt).
2. Add Directives to Block Rogerbot: You'll need to specify the user-agent for Rogerbot and define which pages or directories to block. The User-agent directive specifies which web crawlers the rules apply to, and Disallow directives specify the URLs or directories to block.
Here’s an example of what your robots.txt file might look like if you want to block Rogerbot from crawling certain pages:
```
User-agent: rogerbot
Disallow: /path-to-block/
Disallow: /another-path/
```
If you want to block Rogerbot from accessing pages with certain parameters or patterns, you can use wildcards:

```
User-agent: rogerbot
Disallow: /path-to-block/*
Disallow: /another-path/?parameter=
```
3. Verify the Changes: After updating the robots.txt file, you can use tools like Google Search Console or other site analysis tools to check that the directives are being applied as expected; a quick local check is also sketched below.
4. Monitor and Adjust: Keep an eye on your reports and site performance to ensure that blocking these pages is achieving the desired effect without inadvertently blocking important pages.
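As a quick local complement to those tools, Python's standard urllib.robotparser can simulate how a simple, compliant crawler reads your file. A minimal sketch, assuming a hypothetical domain and paths (note that the standard-library parser does prefix matching only and does not understand Google-style * wildcards):

```python
# Minimal sketch: check which URLs a rogerbot-like crawler may fetch
# according to the live robots.txt. Domain and paths are hypothetical.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.yourwebsite.com/robots.txt")
parser.read()  # fetch and parse the live file

for url in (
    "https://www.yourwebsite.com/path-to-block/page.html",
    "https://www.yourwebsite.com/keep-this-page/",
):
    allowed = parser.can_fetch("rogerbot", url)
    print(f"{url} -> {'allowed' if allowed else 'blocked'}")
```

The user-agent token "rogerbot" matches the one documented on the Moz crawler page linked above.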
By doing this, you should be able to reduce the number of irrelevant or false positive issues reported by Rogerbot and make your reporting more focused and useful.
Related Questions
-
Unsolved 503 Service Unavailable (temporary?) Rogerbot takes a break
A lot of my Moz duties seem to be setting hundreds of issues to ignore because my site was getting crawled while under maintenance. Why can't Rogerbot take a break after running into a few of these and then try again later? Is there an official code for temporary service unavailability that would let smart bots pause crawls so that they are not wasting compute, bandwidth, crawl budget, and my time?
Product Support | awilliams_kingston
-
Small startup Marketing leader - What are 3 actionable reports I can review daily
Hi all - I've joined a small startup as their first marketing hire and I am strategizing, planning, and executing all the work. I need to get down to 3-4 reports I focus on per channel so I can still be relatively effective across multiple channels. What are 3-4 reports in Moz I should be laser-focused on that will help me identify opportunities and threats and determine the best actions to take?
Digital Marketing | AndrewAeqium
-
520 Error from crawl report with Cloudflare
I am getting a lot of 520 Server Errors in crawl reports. I see this is related to Cloudflare. We know 520 is Cloudflare, so maybe the Moz team can change this from "unknown" to "Cloudflare 520". Perhaps the Moz team can also update the "how to fix" section in the reporting if they have suggestions on how to avoid seeing these in the report, or if there is a real issue that needs to be addressed; at this point I don't know. There must be a solution Moz can provide, like a setting in Cloudflare that will permit Rogerbot if Cloudflare is blocking it because it does not like its behavior. It could be that Rogerbot crawled my site on a bad day or at a time when we were deploying a massive site change. If I know when my site will be down, can I pause Rogerbot? I found this: https://developers.cloudflare.com/support/troubleshooting/general-troubleshooting/troubleshooting-crawl-errors/
Technical SEO | awilliams_kingston
-
Unsolved Google Analytics (GA4) recommendations for SEO analysis?
Guides on Moz and elsewhere mostly refer to Google Analytics' Universal Analytics (UA). However, UA is being replaced with GA4, and the interface, options, and reporting are very different. Can you recommend a clear, thorough, and effective walkthrough of how to set up useful SEO reports in GA4? Is there a simple tool you recommend that will help connect historical data from UA to GA4 when GA4 is the only option available? If there's no simple tool, what values do you recommend retaining from UA for effective historical reporting? How would you use them? At minimum for reporting, I'd want to show month-to-month and year-to-year changes (in percentages and in real numbers) for the following:
- all site visits
- all organic visits
- organic visits as a percentage of all site visits
- organic visits that led to a specific goal completion
- organic visits that led to any goal completion
Thanks in advance for your help!
Reporting & Analytics | Kevin_P
-
Abnormally High Direct Traffic Volume
We have abnormally high amounts of direct traffic to our site. It comprises over half of all web traffic, while organic is second with considerably less; from there, the volume decreases across the other channels. I've never seen such a huge proportion of traffic attributed to Direct. Does anyone know how to test this or see if there is an error in Google Analytics reporting?
Reporting & Analytics | graceflack
-
I have had a huge increase in direct traffic to our website but not sure why this suddenly happened? (no promos during this time period)
I have had a huge increase in direct traffic to our website but am not sure why this suddenly happened (there were no promos during this time period). Traffic is up 200%+ according to Google Analytics.
Reporting & Analytics | Julia_a1a
-
Google Analytics shows most referrers as "Direct" -- What are some better tools?
Very often Google Analytics will show 50-90% of our referrers as (direct), which is not very helpful. Are there other tools out there that will provide a clearer breakdown of which other websites are sending us our traffic? Specifically, I want to be able to tell who the top traffic referrers to my top-performing pages have been for the last 30 days. (I want to be able to study this on a per-page basis.) Thanks in advance!
Reporting & Analytics | Brand_Psychic
-
Ideas for a strange surge in direct traffic
Being the type of person who can't stop checking my Google Analytics, I noticed this morning that between the hours of 12 and 2 central time last night I received a strange surge of direct traffic. My site typically gets around 40 direct visits per day (most of them coming during peak hours, around the time people are getting off work). I received 150 direct visits during this random time in the middle of the night. My bounce rate soared, as almost every visit was a bounce. The visitors' locations are spread out as if it were natural human traffic. Every single one of the visitors is using a Chrome browser. Has anyone else run into something like this? All I can think of is that someone might have an add-on or toolbar for Chrome that linked to my site for a while in a way that caused unsuspecting visitors to end up on my page. For now I'm keeping my fingers crossed and hoping the traffic doesn't return, as it could be bad news for my AdSense. *Edit: Also of particular interest, each direct visit went to an internal page on my site, and no two of the 150 visits went to the same internal page. I also added an image showing a normal complete day's direct traffic and my direct traffic for today so far. (The bulk of the surge came yesterday, but the shot from today illustrates the surge better because it is missing my natural daily direct traffic that comes in the afternoon.)
Reporting & Analytics | pattersonla