How to Block Rogerbot from Crawling UTM URLs
-
I am trying to block Rogerbot from crawling some UTM URLs we have created, but I'm having no luck. My robots.txt file looks like:

User-agent: rogerbot
Disallow: /?utm_source*

This does not seem to be working. Any ideas?
-
Shoot! There may be something else going on. Give us a shout at help@moz.com and we'll see if we can figure it out!
-
FYI - I tried this and it did not work. Rogerbot is still picking up URLs we don't need. It's making my crawl report a mess!
-
The only difference there is the * wildcard. With it, the rule blocks the crawler from any URL that contains that string of characters anywhere in the path, not just URLs that begin with it.
-
What is the difference between Disallow: /*?utm_ and Disallow: /?utm_ ?
-
Hi there! Tawny from the Customer Support team here!
You should be able to add a disallow directive for that parameter and any others to block our crawler from accessing them. It would look something like this:
User-agent: Rogerbot
Disallow: /?utm_

etc., until you have blocked all of the parameters that may be causing these duplicate content errors. It looks like the _source* part might be what's giving our tools some trouble. Logan Ray has made an excellent suggestion - give that formatting a try and see if it helps!
You can also use the wildcard user-agent * to block all crawlers from those pages, if you prefer. Here is a great resource about the robots.txt file that might be helpful: https://moz.com/learn/seo/robotstxt
We always recommend checking your robots.txt file with a robots.txt checker tool after you make changes, to avoid any nasty surprises.
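If you'd rather enumerate the parameters one by one, as described above, the file would look something like this (an illustrative sketch - the exact parameter list is an assumption, so match it to the tags you actually use):

```text
User-agent: rogerbot
Disallow: /*?utm_source
Disallow: /*?utm_medium
Disallow: /*?utm_campaign
```

Note that each of these rules only matches when the parameter immediately follows the ?, so a single catch-all rule such as Disallow: /*?utm_ covers every UTM parameter at once.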
-
Skyler,
You're close, give this a shot:
Disallow: /*?utm_
This will match every UTM-tagged URL, regardless of what comes before the tag or which parameter appears first.
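To see why, it helps to know how Google-style wildcard matching works: * matches any run of characters, everything else (including ?) is literal, and a rule matches if it matches from the start of the URL path. A simplified sketch in Python (an illustration only, not Rogerbot's actual implementation):

```python
import re

def rule_matches(rule: str, path: str) -> bool:
    """Return True if a robots.txt Disallow rule matches the URL path.

    Simplified Google-style matching: '*' matches any character
    sequence, every other character (including '?') is literal,
    and the rule must match starting at the beginning of the path.
    """
    # Escape regex metacharacters, then turn the escaped '*' back into '.*'
    pattern = re.escape(rule).replace(r"\*", ".*")
    return re.match(pattern, path) is not None

# The wildcard rule blocks UTM parameters on any page...
print(rule_matches("/*?utm_", "/products?utm_source=news"))  # True
print(rule_matches("/*?utm_", "/?utm_source=news"))          # True
# ...while the non-wildcard rule only matches at the site root:
print(rule_matches("/?utm_", "/products?utm_source=news"))   # False
print(rule_matches("/?utm_", "/?utm_source=news"))           # True
```

The non-wildcard rule requires the ? to come right after the leading /, which is why tagged URLs on deeper paths slip through it.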
Related Questions
-
Unsolved 503 Service Unavailable (temporary?) - Rogerbot takes a break
A lot of my Moz duties seem to be setting hundreds of issues to ignore, because my site was getting crawled while under maintenance. Why can't Rogerbot take a break after running into a few of these and then try again later? Is there an official code for temporary service unavailability that can tell smart bots to pause crawls, so that they are not wasting compute, bandwidth, crawl budget, and my time?
Product Support | awilliams_kingston
-
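For what it's worth, there is an official signal for exactly this: an HTTP 503 Service Unavailable response with a Retry-After header, which tells well-behaved crawlers the site is temporarily down and when to come back (whether any particular bot honors it is up to that bot). A minimal sketch using only Python's standard library - the maintenance flag is hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

MAINTENANCE_MODE = True  # hypothetical flag your deploy tooling would set

class MaintenanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if MAINTENANCE_MODE:
            # 503 = "temporarily unavailable"; Retry-After hints when
            # a crawler should come back (here: one hour, in seconds).
            self.send_response(503)
            self.send_header("Retry-After", "3600")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Service temporarily unavailable\n")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"OK\n")

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve: HTTPServer(("", 8080), MaintenanceHandler).serve_forever()
```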
Unsolved Crawling only the homepage of my website
Hello,
I don't understand why Moz crawls only the homepage of our website https://www.modelos-de-curriculum.com. We added the website correctly and asked for all pages to be crawled, but the tool finds only the homepage. Why? We are testing the tool before subscribing, but we need to be sure that it works for our website. Please help if you can.
Product Support | Azurius
-
Solved Why is the Moz crawl taking so long?
I began my site crawl on November 3rd and now it is November 7th and it is still "in progress". Why is this happening?
Product Support | CarisaS_Wenda
-
Why does Moz see short Russian & Chinese URLs as too long?
We are translating content into Russian and Chinese on our website, and the number of "URL too long" errors increases each time we create a page with a Chinese or Russian URL. If you click on the link below for a Chinese content page: https://www.westbourneschool.com/zh-hans/%E5%AE%BF%E8%88%8D%E5%8F%8A%E5%AF%84%E5%AE%BF%E5%AE%B6%E5%BA%AD/%E5%AE%BF%E8%88%8D%E7%94%9F%E6%B4%BB you will notice the URL displayed by the browser is actually not very long. Is there a way for Moz not to see it as it appears above? Below is a page in Russian: https://www.westbourneschool.com/ru/%D0%A8%D0%BA%D0%BE%D0%BB%D0%B0%20%D0%9F%D1%80%D0%BE%D0%B6%D0%B8%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5 Any help will be much appreciated.
Product Support | mariedetitomount
-
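A likely explanation: length checks usually apply to the percent-encoded form of the URL, which is what crawlers actually request. Each non-ASCII character expands to several %XX escapes, so a short-looking Cyrillic or Chinese path is genuinely long on the wire. The difference is easy to see with Python's standard library:

```python
from urllib.parse import unquote

# One of the URLs from the question: short in the browser bar,
# long in its percent-encoded (on-the-wire) form.
encoded = ("https://www.westbourneschool.com/ru/"
           "%D0%A8%D0%BA%D0%BE%D0%BB%D0%B0%20"
           "%D0%9F%D1%80%D0%BE%D0%B6%D0%B8%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5")
decoded = unquote(encoded)  # what the browser displays

print(len(encoded), len(decoded))
# Each Cyrillic letter is 2 UTF-8 bytes -> 6 characters when encoded,
# so the encoded form is several times longer than the decoded one.
```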
Crawl error: robots.txt
Hello, when trying to run a site crawl so we can analyze our page, the following error appears: "Moz was unable to crawl your site on Nov 15, 2017. Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster." Can you help us? Thanks!
Product Support | Mandiram
-
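One way to check this from your side is Python's standard-library robots.txt parser, which answers the same question a crawler asks: "may this user-agent fetch this URL?" The rules below are illustrative; point set_url at your live file to test the real thing. Note that this parser implements the original robots.txt spec and does not understand * wildcards inside paths:

```python
from urllib import robotparser

# Illustrative rules: a blanket "Disallow: /" like this is the kind of
# line that bans every crawler, including Moz's, site-wide.
rules = [
    "User-agent: *",
    "Disallow: /",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)
# For a live site: rp.set_url("https://example.com/robots.txt"); rp.read()

print(rp.can_fetch("rogerbot", "https://example.com/"))      # False
print(rp.can_fetch("rogerbot", "https://example.com/page"))  # False
```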
What is the difference between the "Crawl Issues" report and the "Crawl Test" report?
I've downloaded the CSV of the Crawl Diagnostics report (which downloads as the "Crawl Issues" report) and the CSV from the Crawl Test report, and pulled out the pages for a specific subdomain. The Crawl Test report gave me about 150 pages, where the Crawl Issues report gave 500 pages. Why would there be that difference in results? I've checked for duplicate URLs and there are none within the Crawl Issues report.
Product Support | SBowen-Jive
-
Crawl errors are still shown after being fixed
I fixed "title too long" and some 404 errors long ago, but they still keep showing in the error statistics.
Product Support | sws1
-
Crawl Limit Question
I'm a little confused as to how the crawl limit works. Since there seems to be a 10K per week max, the crawl limit can't be per week, so what is the time period? Also, does that include crawling sites entered as competitors? Right now I'm at 14/25 sites and most of them are under 1,000 pages so I'm not sure how I hit that limit (other than a one-time spike of 28,000 in November).
Product Support | David_Moceri