How to Block Rogerbot from Crawling UTM URLs
-
I am trying to block Rogerbot from crawling some UTM URLs we have created, but I'm having no luck. My robots.txt file looks like:

User-agent: rogerbot
Disallow: /?utm_source*

This does not seem to be working. Any ideas?
-
Shoot! There may be something else going on. Give us a shout at help@moz.com and we'll see if we can figure it out!
-
FYI - I tried this and it did not work. Rogerbot is still picking up URLs we don't need. It's making my crawl report a mess!
-
The only difference is the * wildcard. With the wildcard, the directive blocks the crawler from any URL that contains ?utm_ anywhere in it; without it, only URLs whose path begins with the literal /?utm_ are blocked.
-
What is the difference between Disallow: /*?utm_ and Disallow: /?utm_ ?
-
Hi there! Tawny from the Customer Support team here!
You should be able to add a disallow directive for that parameter and any others to block our crawler from accessing them. It would look something like this:
User-agent: Rogerbot
Disallow: ?utm_source

...and so on, until you have blocked all of the parameters that may be causing these duplicate content errors. The _source* part of your directive might be what's giving our tools some trouble. Logan Ray has made an excellent suggestion - give that formatting a try and see if it helps!
You can also use the wildcard user-agent (*) to block all crawlers from those pages, if you prefer. Here is a great resource about the robots.txt file that might be helpful: https://moz.com/learn/seo/robotstxt. We always recommend checking your robots.txt file with a robots checker tool after you make changes, to avoid any nasty surprises.
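As a quick sanity check outside any third-party tool, Python's built-in urllib.robotparser can test a robots.txt snippet. One caveat: it implements the original robots.txt spec, so Disallow paths are matched as plain prefixes and * wildcards are not expanded the way Google-style crawlers expand them. This sketch (the example URLs are placeholders) shows exactly why the prefix-only rule misses deeper pages:

```python
from urllib import robotparser

# Parse a robots.txt snippet in memory instead of fetching one over HTTP.
rp = robotparser.RobotFileParser()
rp.parse("User-agent: rogerbot\nDisallow: /?utm_\n".splitlines())
rp.modified()  # mark the rules as loaded so can_fetch() doesn't fail closed

# Prefix matching only: the homepage with UTM parameters is blocked...
print(rp.can_fetch("rogerbot", "https://example.com/?utm_source=news"))      # False

# ...but deeper pages slip through, because "/page?utm_" does not start
# with the literal prefix "/?utm_" (a wildcard-aware crawler would differ).
print(rp.can_fetch("rogerbot", "https://example.com/page?utm_source=news"))  # True
```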
-
Skyler,
You're close, give this a shot:
Disallow: /*?utm_
This will match all UTM-tagged URLs regardless of what comes before the ? or which parameter appears first.
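To see why the wildcard matters, here is a minimal sketch of Google-style robots.txt pattern matching (assuming Rogerbot honors the same * semantics; the blocked helper and the example paths are illustrative, not part of any library):

```python
import re

def blocked(pattern: str, path: str) -> bool:
    """Return True if a robots.txt Disallow pattern matches a URL
    path+query, treating '*' as 'any run of characters' and anchoring
    the match at the start of the path (Google-style semantics)."""
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.match(regex, path) is not None

print(blocked("/?utm_",  "/page?utm_source=news"))  # False: literal prefix only
print(blocked("/*?utm_", "/page?utm_source=news"))  # True: '*' spans "page"
print(blocked("/*?utm_", "/?utm_source=news"))      # True: '*' can match nothing
```

In other words, /?utm_ only catches UTM parameters on the homepage, while /*?utm_ catches them on every path.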
Related Questions
-
Solved: Site Crawl Won't Complete
How can I start/restart a new site crawl? I requested one 2 days ago on one of my sites, and it won't complete. It's only 150 pages.
Product Support | PaulBarrs
Which URL should I add at general settings?
Hi, I would like to double-check something. We have set up our site to be international with multiple languages, but at the moment only one language is live. So when you go to www.yourzite.com, you are redirected to www.yourzite.com/nl/. Which URL should I add in the general campaign settings? I have now added www.yourzite.com/nl/, but it shows that I don't have any incoming links or domain authority. Should I change the link to yourzite.com? That is also the link that Google shows in the search results. Regards, Jack
Product Support | YourZite.com
Why is comment moderation blocking all my comments? There are no links in them.
For the past few days, every time I try posting a comment, no matter how long or short, how informative or simple, I've been getting the error message that the comment isn't being allowed. I only tried a few times, once on several different posts (not spamming, God forbid!), and I never include links in my comments... So what's wrong?! So frustrated, I am so not up to this right now. Why can't I share my insights and questions on Moz anymore? Gaaaaaaa!
Product Support | whiteonlySEO
Both campaigns are now useless due to URL rewrite?
I have two campaigns on Moz and they were doing fine until I decided to rewrite my URLs to remove www, so www.thing.com becomes thing.com. Moz seems to see this as an error, and I am now getting error code 902. I tried to change my campaign settings, but it won't let me change the URL because it has historical information attached, I guess. What should I do? Was it a mistake to remove the www? Thanks for any advice, Greg
Product Support | Banknotes
200 mozpoints : Removal of "nofollow" from first custom URL on profile but first link is nofollow...
Hello, I try to help the Moz community and hope to earn 200 mozpoints a day 🙂 When I analyze the profile of a member with more than 200 mozpoints, I see two links to the custom URL: one with nofollow added on a blank image, and one followed link with anchor text. But tests show that in this case the second link is not taken into account... So why not remove the first link? Here is the code: xxx: http://www.example.com/
Product Support | Bigb06
Why can I not crawl this site
I wanted to add this site as a new campaign: new.kbc.be. But it won't accept it. Why?
Product Support | KBC
Crawl Limit Question
I'm a little confused as to how the crawl limit works. Since there seems to be a 10K per week max, the crawl limit can't be per week, so what is the time period? Also, does that include crawling sites entered as competitors? Right now I'm at 14/25 sites and most of them are under 1,000 pages so I'm not sure how I hit that limit (other than a one-time spike of 28,000 in November).
Product Support | David_Moceri
Duplicate Content Report: Duplicate URLs being crawled with "++" at the end
Hi, in our Moz report over the past few weeks I've noticed some duplicate URLs appearing, like the following: Original (valid) URL: http://www.paperstone.co.uk/cat_553-616_Office-Pins-Clips-and-Bands.aspx?filter_colour=Green Duplicate URL: http://www.paperstone.co.uk/cat_553-616_Office-Pins-Clips-and-Bands.aspx?filter_colour=Green++ These aren't appearing in Webmaster Tools, or in a Screaming Frog crawl of our site, so I'm wondering if this is a bug with the Moz crawler? I realise that it could be resolved using a canonical reference, or by performing a 301 from the duplicate to the canonical URL, but I'd like to find out what's causing it and whether anyone else is experiencing the same problem. Thanks, George
Product Support | webmethod