Questions created by lzhao
Would you rate-control Googlebot? How much crawling is too much crawling?
One of our sites is very large - over 500M pages. Google has indexed 1/8th of the site, and they tend to crawl between 800k and 1M pages per day. A few times a year, Google will significantly increase their crawl rate - overnight hitting 2M pages per day or more. This creates big problems for us, because at 1M pages per day Google is consuming 70% of our API capacity, and the API overall is at 90% capacity. At 2M pages per day, 20% of our page requests are 500 errors. I've lobbied for an investment in / overhaul of the API configuration to allow for more Google bandwidth without compromising user experience. My tech team counters that it's a wasted investment, as Google will simply crawl to whatever our capacity happens to be. Questions for Enterprise SEOs:
* Is there any validity to the tech team's claim? I thought Google's crawl rate was based on a combination of PageRank and the frequency of page updates. That implies there is some upper limit - one we perhaps haven't reached, but at which crawling would stabilize.
* We've asked Google to rate-limit our crawling in the past. Is that harmful? I've always looked at a robust crawl rate as a good problem to have. Is 1.5M Googlebot API calls a day desirable, or something any reasonable Enterprise SEO would seek to throttle back?
* What about setting a longer refresh rate in the sitemaps? Would that reduce the daily crawl demand? We could increase it to a month, but at 500M pages Google could still have a ball at the 2M pages/day rate.
Thanks
Intermediate & Advanced SEO | lzhao
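For anyone who wants to sanity-check the capacity figures in the crawl question above, here is a back-of-the-envelope sketch. It only uses the numbers already quoted (1M pages/day consuming roughly 70% of API capacity, overall utilization around 90%); the assumption that API load scales linearly with daily page requests is mine, not an established fact about this API.

```python
# Back-of-the-envelope check of the crawl/capacity figures in the question.
# Assumes API load scales roughly linearly with daily page requests.

googlebot_pages_per_day = 1_000_000      # typical Googlebot crawl volume (from the question)
googlebot_share_of_capacity = 0.70       # that crawl consumes ~70% of API capacity
total_utilization = 0.90                 # API overall sits at ~90% of capacity

# Implied total capacity, expressed in "pages per day" equivalents.
total_capacity = googlebot_pages_per_day / googlebot_share_of_capacity
headroom = (1.0 - total_utilization) * total_capacity

print(f"Implied API capacity: ~{total_capacity:,.0f} page requests/day")
print(f"Headroom before saturation: ~{headroom:,.0f} extra requests/day")

# What a 2M pages/day crawl spike would demand of the API, all else equal.
spike_utilization = total_utilization + (2_000_000 - googlebot_pages_per_day) / total_capacity
print(f"Utilization during a 2M pages/day spike: ~{spike_utilization:.0%}")
```

Under those assumptions the headroom works out to roughly 140k extra requests per day, and a 2M pages/day spike would push demand well past 100% of capacity, which is consistent with the 500 errors described.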
Disallow statement - is this tiny anomaly enough to render Disallow invalid?
Google site search (site:'hbn.hoovers.com') indicates 171,000 results for this subdomain. That is not a desired result - this site is 100% duplicate content, and we don't want search engines spending any time here. Robots.txt is set up, mostly correctly, to disallow all search engines from indexing this site. The asterisk at the end of the Disallow statement looks pretty harmless - but could it be why the site has been indexed?
User-agent: *
Disallow: /*
Technical SEO | lzhao
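Not an answer to the question, but here is a rough way to see how a trailing wildcard behaves. The matcher below is only a sketch of the documented wildcard rules ('*' matches any run of characters, '$' anchors the end of the path), not Google's actual parser; under it, 'Disallow: /*' and 'Disallow: /' block exactly the same paths.

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Rough approximation of robots.txt wildcard matching:
    '*' matches any sequence of characters, '$' anchors the end of the path."""
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    return re.match(regex, path) is not None

# Under this approximation, both rules match every path on the host.
for rule in ("/", "/*"):
    print(rule, robots_pattern_matches(rule, "/companies/acme-inc/profile.html"))
```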
Are URL suffixes ignored by Google? Or is this duplicate content?
Example URLs:
www.example.com/great-article-on-dog-hygiene.html
www.example.com/great-article-on-dog-hygiene.rt-article.html
My IT dept. tells me the second instance of this article would be ignored by Google, but I've found a couple of instances in which Google did index the 'rt-article.html' version of the page. To be fair, I've only found a couple out of MANY. Is it an issue? Thanks, Trisha
Web Design | lzhao
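As a small illustration of how such variants can be spotted in a crawl export or log file, here is a sketch that collapses the '.rt-article' form back to the base URL; the pattern is specific to the example URLs in the question above and would need adjusting for a real site.

```python
import re

def canonical_form(url: str) -> str:
    """Collapse the '.rt-article.html' variant back to the plain '.html' URL."""
    return re.sub(r"\.rt-article\.html$", ".html", url)

urls = [
    "www.example.com/great-article-on-dog-hygiene.html",
    "www.example.com/great-article-on-dog-hygiene.rt-article.html",
]
# True: both forms resolve to the same underlying article.
print(canonical_form(urls[0]) == canonical_form(urls[1]))
```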
Temporarily suspend Googlebot without blocking users
We'll soon be launching a redesign on a new platform, migrating millions of pages to new URLs. How can I tell Google (and other crawlers) to temporarily (a day or two) ignore my site? We're hoping to buy ourselves a small bit of time to verify redirects and live functionality before allowing Google to crawl and index the new architecture. GWT's recommendation is to return a 503 on all pages - including robots.txt - but that also makes the site invisible to real visitors, resulting in significant business loss. Bad answer. I've heard some recommendations to disallow all user agents in robots.txt. Any answer that puts the millions of pages we already have indexed at risk is also a bad answer. Thanks
Technical SEO | lzhao
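For what it's worth, here is a minimal sketch of the "serve 503s only to crawlers" idea raised by the question above, assuming a WSGI stack and a hand-picked list of bot user-agent tokens (both assumptions on my part - this is not a recommendation, just what the mechanics could look like).

```python
from wsgiref.simple_server import make_server

CRAWLER_TOKENS = ("googlebot", "bingbot", "yandexbot")  # assumed list of bots to pause

def app(environ, start_response):
    user_agent = environ.get("HTTP_USER_AGENT", "").lower()
    if any(token in user_agent for token in CRAWLER_TOKENS):
        # Tell crawlers to come back later; human visitors never see this branch.
        start_response("503 Service Unavailable", [
            ("Retry-After", "172800"),       # roughly two days, in seconds
            ("Content-Type", "text/plain"),
        ])
        return [b"Temporarily unavailable to crawlers during migration."]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html><body>Normal page for human visitors.</body></html>"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```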
To Reduce (pages)... or not to Reduce?
Our site has a large Business Directory with millions of pages. For example's sake, let's say it's a directory of Restaurants. Each Restaurant has 4 pages on the site, tied together through a row of tabs across the top of the page:
Tab 1 - Basic super 7 info - name, location, contact info
Tab 2 - Restaurant menu
Tab 3 - Restaurant reviews
Tab 4 - Photos of food
The Tab 1 page generates 95% of our traffic and 90% of conversions. The conversion rate on Tab 2 - Tab 4 pages is 6 - 10x greater than Tab 1's. Total conversions from search queries on menus, reviews and food are 20% higher than conversions resulting from searches on restaurant name & info alone. We're working with a consultant on a redesign, who wants to consolidate the 4 pages into one. Their advice is to focus on making a better page featuring all of the content, sacrificing a little organic traffic but making up any losses by improving conversion. My counterpoint is that we shouldn't scrap the Tab 2-4 pages just because they have lower traffic - we should make the pages BETTER. The content we display is thin, and we have plenty of data we could expose to make the pages more robust. By consolidating, it will also be hard to optimize one page for people searching for name/location AND menu AND reviews AND photos. We're asking that one page to do too much, and it's likely we will see diminished search traffic for queries on menus, reviews and food. I think the decline will be much more significant than the consultant estimates. The consultant says there will be little change to organic traffic, since Tab 1 already generates 95% of it. Through basic math, they're saying the risk is a 5% decline in organic traffic. Further, they see little chance of queries for menus, reviews, and food declining, because most of those queries tend to send people to the home page or Tab 1 page anyway. Finally, the designer of the new wireframes admitted that potential organic traffic risks were not taken into consideration when they recommended consolidating the pages. I sincerely appreciate your thoughts and consideration! Trisha
On-Page Optimization | lzhao
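For readers following the "basic math" in the question above, here is an illustrative sketch of why conversion share and traffic share can diverge. The 95/5 traffic split and the 6-10x conversion-rate multiple come from the question; the 2% baseline conversion rate is an invented placeholder, so the printed percentages are illustrative only.

```python
# Illustrative only: how a small slice of traffic with a much higher conversion
# rate can account for a disproportionate share of total conversions.
tab1_traffic, tabs234_traffic = 0.95, 0.05   # traffic split from the question
baseline_rate = 0.02                         # assumed Tab 1 conversion rate (placeholder)

for multiple in (6, 10):                     # Tab 2-4 convert at 6-10x Tab 1's rate
    tab1_conversions = tab1_traffic * baseline_rate
    tabs234_conversions = tabs234_traffic * baseline_rate * multiple
    share = tabs234_conversions / (tab1_conversions + tabs234_conversions)
    print(f"At {multiple}x: Tabs 2-4 drive {share:.0%} of conversions from 5% of traffic")
```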
Does Google respect User-agent rules in robots.txt?
We want to use an inline linking tool (LinkSmart) to cross-link between a few key content types on our online news site. LinkSmart uses a bot to establish the linking. The issue: there are millions of pages on our site that we don't want LinkSmart to spider and process for cross-linking. LinkSmart suggested setting a noindex tag on the pages we don't want them to process, and targeting the rule to their specific user agent. I have concerns. We don't want to inadvertently block search engine access to those millions of pages. I've seen Googlebot ignore nofollow rules set at the page level. Does it ever arbitrarily obey rules that it's been directed to ignore? Can you quantify the level of risk in setting user-agent-specific nofollow tags on pages we want search engines to crawl, but that we want LinkSmart to ignore?
On-Page Optimization | lzhao
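To see how user-agent-scoped robots.txt groups behave in practice, here is a small sketch using Python's standard-library parser. The bot name "LinkSmartBot" and the /archive/ path are placeholders I made up, not the vendor's documented user agent; the point is only that a group aimed at one agent leaves other agents untouched.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block only one vendor's bot from a section of the site.
robots_txt = """\
User-agent: LinkSmartBot
Disallow: /archive/

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

url = "http://example.com/archive/story-123.html"
print(parser.can_fetch("LinkSmartBot", url))  # False - the scoped rule applies
print(parser.can_fetch("Googlebot", url))     # True  - Googlebot falls through to the catch-all
```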
Should we use Google's crawl delay setting?
We've been noticing a huge uptick in Google's spidering lately, and along with it a notable worsening of render times. Yesterday, for example, Google spidered our site at a rate of 30:1 (Google spider vs. organic traffic). In other words, for every organic page request, Google hits the site 30 times. Our render times have lengthened to an average of 2 seconds (and up to 2.5 seconds). Before this renewed interest from Google, we were seeing closer to one-second average render times, and often half that. A year ago, the ratio of spider to organic traffic was between 6:1 and 10:1. Is requesting a crawl delay from Googlebot a viable option? Our goal would be only to reduce Googlebot traffic, and hopefully improve render times and organic traffic. Thanks, Trisha
Technical SEO | lzhao
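As an aside for anyone trying to reproduce a spider-to-organic figure like the 30:1 in the question above, here is a rough sketch of how the ratio could be pulled from a day's access log. The log path and the simple "Googlebot substring in the line" test are assumptions for illustration, not a description of the actual setup.

```python
from collections import Counter

counts = Counter()
with open("access.log") as log:              # assumed path to one day's access log
    for line in log:
        # Crude split: anything identifying itself as Googlebot vs. everything else.
        counts["googlebot" if "Googlebot" in line else "other"] += 1

if counts["other"]:
    ratio = counts["googlebot"] / counts["other"]
    print(f"Googlebot vs. other requests: {ratio:.1f} : 1")
```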