The "webmaster" disallowed all ROBOTS to fight spam! Help!!
-
One of the companies I do work for has a Magento site. I'm just the SEO guy; they run the website through outside developers who guard access to their systems VERY tightly. Using Google Webmaster Tools I saw that the robots.txt file was blocking ALL robots.
I immediately e-mailed them and received a long reply about foreign robots and scrapers slowing down the website. They told me I would have to provide a list of only the good robots to allow in robots.txt.
Please correct me if I'm wrong, but isn't robots.txt optional?? Won't a bad scraper or bot still bog down the site? Shouldn't that be handled in .htaccess or something similar?
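To illustrate what I mean by handling it at the server level, I'm picturing something like the Apache 2.4 sketch below (assuming mod_setenvif is available; the bot names are made-up examples, not a vetted blocklist — the developers would need to substitute whatever actually shows up in their access logs):

```apache
# Flag unwanted crawlers by user-agent substring, then deny them.
# "BadBot" and "EvilScraper" are placeholder names for illustration.
BrowserMatchNoCase "BadBot|EvilScraper" bad_bot

<Directory "/var/www/html">
    <RequireAll>
        Require all granted
        Require not env bad_bot
    </RequireAll>
</Directory>
```

User agents can be spoofed, so this is only a first line of defense (rate limiting or a firewall may also be needed), but it would leave robots.txt free to welcome legitimate crawlers.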
I'm not new to SEO, but I'm sure some of you who have been around longer have run into something like this and could provide some suggestions or resources I could use to plead my case!
If I'm wrong, please help me understand how we can meet both needs: allowing bots to visit the site while keeping the 'bad' ones out. Their claim is that the site is bombarded by tons and tons of bots that have slowed down performance.
Thanks in advance for your help!
-
Thanks for the suggestions!! I'll keep you updated.
-
You can get a list of good robots from the database at Robotstxt.org: http://www.robotstxt.org/db.html.
I'd recommend creating an edited version of the robots.txt file yourself, specifically allowing Googlebot and the other crawlers you need. Then send that along with a link to the robotstxt.org site.
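As a starting point, a whitelist-style robots.txt might look something like this (only a sketch — Googlebot and Bingbot are the obvious entries, and the list would need to be expanded from the robotstxt.org database):

```
User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow:

User-agent: *
Disallow: /
```

Compliant crawlers follow the most specific group matching their user agent, so Googlebot and Bingbot get full access (an empty Disallow allows everything) while every other well-behaved bot falls through to the catch-all block at the bottom.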
You may need to get the business owners involved. IT exists to enable the business, not strap it down so it can't move.
-
What you could do is add Allow rules (or per-agent groups) for the various Googlebots and the bots of the other search engines. That would probably make the developers happy, since they can still keep other bots out of the door (although I doubt this would work, and robots.txt definitely shouldn't be the tool for keeping spammers away; but that says more about the quality of the development ;-)).
-
Yes, there are a ton of bad bots one may want to block. Can you show us the robots.txt file? If they aren't blocking legitimate search engine bots, you're probably okay. If they are actually blocking all bots, you have cause for concern.
Can you give us a screenshot from GWT?
I use a program called Screaming Frog daily. It's an off-the-shelf tool, nothing malicious; I just want to crawl and gather metadata. I can tell it to disregard robots.txt, and it will crawl a site until it hits something password-protected. There's not much any robots.txt can do about that, since a crawler can also spoof user agents.
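That's easy to demonstrate: even Python's standard library treats the robots.txt check as a separate, entirely voluntary step. A quick sketch (example.com is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that disallows everything, like the one described above.
rules = RobotFileParser()
rules.parse("User-agent: *\nDisallow: /".splitlines())

# A polite crawler asks first -- and is told no.
print(rules.can_fetch("AnyBot", "https://example.com/page"))  # False

# But nothing enforces the answer: a scraper can simply skip this check
# (or report a different user agent) and fetch the page anyway.
```

robots.txt is a request, not a lock; the only bots it stops are the ones that choose to obey it.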