Restrict rogerbot for a few days
-
Hi Team,
I have a subdomain built on Zendesk's CRM system. I want to block the Moz crawler (rogerbot) from crawling this entire subdomain for a few days, but I am not able to edit the subdomain's robots.txt file, because it is a shared file and Zendesk does not allow editing it. Could you please let me know an alternative way to stop rogerbot from crawling this subdomain?
I am eagerly awaiting your quick response.
Thanks
-
Do you have FTP access and/or Yoast installed (if this is a WordPress website)?
Thanks,
Zack -
Not that I can think of, no — robots.txt is the mechanism for controlling how a specific user-agent like rogerbot is treated.
-
Is there any way to restrict via Google Tag Manager?
-
Hey there!
Thanks for reaching out to us!
Unfortunately, you would need access to the robots.txt file in order to block us from crawling a certain subdomain. I'm sorry I can't be of any more help here. Please do reach out to help@moz.com if you have any further questions.
Best,
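For context, if edit access to the subdomain's robots.txt ever becomes available, a temporary site-wide block for rogerbot is just two lines — this is a generic sketch of the standard robots.txt convention, not Zendesk-specific advice:

```
User-agent: rogerbot
Disallow: /
```

Removing that group again after a few days lets rogerbot resume crawling on its next visit.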
Related Questions
-
Rogerbot does not catch all existing 4XX Errors
Hi, each new crawl presents me with new 4XX errors — why doesn't Rogerbot report them all at once? I have a small static site; nine crawls ago it had ten 4XX errors, so I tried to fix them all. The next crawl Rogerbot still found 5 errors, so I thought I had not fixed them all... but this has now happened many times, so before the latest crawl I double-checked that I had really fixed every error. Today, although I really did correct 5 errors, Rogerbot dug up 2 "new" errors. So does Rogerbot not catch all the errors that have been on my site for many weeks? Please see the screenshot of how I was chasing the errors 😉
Moz Pro | | inlinear -
Data Update for RogerBot
Hi, I noticed that rogerbot still gives me a 404 for http://www.salustore.com/capelli/nanogen-acquamatch.html (referral from http://www.salustore.com/protocollo-nanogen) even though I made changes a couple of weeks ago. Same with one "Title Element Too Short" error on our site. Any suggestion on how to refresh it? Best regards, n.
Moz Pro | | nicolobottazzi -
My website has not been crawled by Google for the last 15 days
Hello, emergency help required: one of my client's websites, http://www.mediaxpress.net/, has not been crawled for the last 15 days. What could be the reason? I have checked Google Webmaster Tools and it is not showing any issue, and even the SEOmoz crawler is crawling the site normally. I have updated some data on the pages and want Google to crawl them, but Google is not crawling at all.
Moz Pro | | CommercePundit -
Crawl Test has taken over 5 days and still has not completed
I am running some crawls on some sites and a number are still pending. I have one from 7 days ago, a couple from 6 days ago, and one from 5 days ago. The confusing thing is that I have run a few others in that same period that have already finished. Do I need to restart the crawls, or cancel them and start over?
Moz Pro | | DRSearchEngOpt -
Crawl Test taking 10+ days and still "In Progress" - normal or glitch?
I started a crawl test for my large site - WallStreetOasis.com - on June 20 and still have not received my results; it still says "Crawl in Progress" 10 days later. Does this seem odd or problematic, or is this normal?
Moz Pro | | WallStreetOasis.com -
Open Site Explorer CSV report pending for 2 days!
I tried to download a CSV report from Open Site Explorer, but it has been pending for 2 days. Why do these reports take so much time? When can I expect to get this report?
Moz Pro | | ShashiR -
Rogerbot Ignoring Robots.txt?
Hi guys, we're trying to stop Rogerbot from spending 8000-9000 of our 10000 pages per week of our site crawl on our zillions of PhotoGallery.asp pages. Unfortunately our e-commerce CMS isn't tremendously flexible, so the only way we believe we can block rogerbot is in our robots.txt file. Rogerbot keeps crawling all these PhotoGallery.asp pages, so it's making our crawl diagnostics really useless. I've contacted the SEOMoz support staff and they claim the problem is on our side. This is the robots.txt we are using:

User-agent: rogerbot
Disallow:/PhotoGallery.asp
Disallow:/pindex.asp
Disallow:/help.asp
Disallow:/kb.asp
Disallow:/ReviewNew.asp

User-agent: *
Disallow:/cgi-bin/
Disallow:/myaccount.asp
Disallow:/WishList.asp
Disallow:/CFreeDiamondSearch.asp
Disallow:/DiamondDetails.asp
Disallow:/ShoppingCart.asp
Disallow:/one-page-checkout.asp

Sitemap: http://store.jrdunn.com/sitemap.xml

For some reason the WYSIWYG editor is entering extra spaces, but those are all single-spaced. Any suggestions? The only other thing I thought of to try is something like "Disallow:/PhotoGallery.asp*" with a wildcard.
Moz Pro | | kellydallen -
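As a quick sanity check on rules like these, Python's standard-library robots.txt parser can be fed the file contents directly — a minimal sketch with a shortened version of the rules above (the stdlib parser happens to accept the no-space `Disallow:/path` form, but rogerbot's own parser may behave differently, so this only shows how a conforming parser reads the file):

```python
# Feed a robots.txt snippet to Python's stdlib parser and check which
# paths a given user-agent may fetch. No network access is needed.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: rogerbot
Disallow:/PhotoGallery.asp
Disallow:/pindex.asp

User-agent: *
Disallow:/cgi-bin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("rogerbot", "/PhotoGallery.asp"))    # False: blocked for rogerbot
print(parser.can_fetch("rogerbot", "/DiamondDetails.asp"))  # True: no matching rule in rogerbot's group
print(parser.can_fetch("someotherbot", "/cgi-bin/test"))    # False: falls under the * group
```

Note that a user-agent with its own group (rogerbot here) does not also inherit the `*` rules, which is why `/DiamondDetails.asp` stays fetchable for rogerbot in this shortened example.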
SEOmoz Q&A down last few days
It seems the Q&A section has had some issues since the 25th: users could post new questions, but they were not visible to most users. Roger was caught slacking! The issue appears to be resolved at this time. I just wanted to share that anyone who asked a question in the past few days and did not receive a response may wish to repost it, as many readers will not go back and check questions from prior days.
Moz Pro | | RyanKent