Moz campaign works around my robots.txt settings
-
My robots.txt file looks like this:
User-agent: *
Disallow: /*?
Disallow: /search
So it should block crawling of all dynamic URLs (anything with a query string) as well as the /search section.
If I check this url in Google:
site:http://www.webdesign.org/search/page-1.html?author=47
Google tells me:
A description for this result is not available because of this site's robots.txt – learn more.
So far so good.
Now, I ran a Moz SEO campaign and I got a bunch of duplicate page content errors.
One of the links is this one:
http://www.webdesign.org/search/page-1.html?author=47
(the same I tested in Google and it told me that the page is blocked by robots.txt which I want)
So, it makes me think that Moz campaigns crawl files regardless of what robots.txt says? It's my understanding that User-agent: * should forbid Rogerbot from crawling as well. Am I missing something?
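You can sanity-check which of your rules actually blocks that URL with Python's standard-library robots.txt parser. One caveat worth knowing: the stdlib parser only does simple path-prefix matching, so the /*? wildcard (a Google/Bing extension that many crawlers also honor) is not understood by it; in this sketch it is the plain /search prefix rule that does the blocking:

```python
from urllib import robotparser

# Reproduce the rules from the question's robots.txt.
rules = [
    "User-agent: *",
    "Disallow: /*?",      # wildcard: ignored by the stdlib parser (extension syntax)
    "Disallow: /search",  # plain prefix: this is what matches below
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

blocked = "http://www.webdesign.org/search/page-1.html?author=47"
print(rp.can_fetch("rogerbot", blocked))  # False: caught by the /search prefix

# A path outside /search with no matching rule is allowed.
print(rp.can_fetch("rogerbot", "http://www.webdesign.org/about.html"))  # True
```

If a crawler still fetches a URL that this check (or Google's own robots.txt tester) reports as disallowed, the problem is on the crawler's side rather than in the file.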
-
That worked, thanks!
-
Thanks Abe.
I guess I'll try this:
User-agent: Rogerbot
Disallow: /*?
Because if I use Disallow: /, I'll lose my current Moz reports, since Rogerbot will stop crawling the site entirely, right?
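That reasoning matches how the Robots Exclusion Protocol works: a crawler that finds a group matching its own name follows that group *instead of* the generic * group, so any rules you still want Rogerbot to obey have to be repeated inside its group. A sketch of a robots.txt that keeps Moz reports for static pages while blocking the dynamic URLs for everyone (the exact rules here are illustrative, not a recommendation for any particular site):

```
# Generic crawlers (Googlebot, Bingbot, ...)
User-agent: *
Disallow: /*?
Disallow: /search

# Rogerbot ignores the * group once this group exists,
# so repeat every rule it should follow.
User-agent: rogerbot
Disallow: /*?
Disallow: /search
```

With this layout, Disallow: / under the rogerbot group would indeed stop its crawl of the whole site, which is why repeating the narrower rules is the way to keep the reports.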
-
Hello Vince, thank you for reaching out to us! This seems quite odd; our crawler normally obeys robots.txt files. Let's try this. Add this code to your robots.txt:
User-agent: Rogerbot
Disallow: /
This targets our crawler by name, so it should be explicitly instructed to follow these rules. Once you have tried this, if it does not work, please send an email to help@moz.com and we will have our engineers dig in a bit further. Sorry for the inconvenience; I hope the above fix works for you.