Googlebot size limit
-
Hi there,
There is about 2.8 KB of JavaScript above the content of our homepage. I know it isn't desirable, but is this something I need to be concerned about?
Thanks,
Sarah
Update: It's fine. Ran a Fetch as Google and it's rendering as it should be. I would delete my question if I could figure out how!
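For anyone with the same worry, the usual fix is to keep scripts out of the critical rendering path so the content comes first. A minimal sketch, assuming the script lives in an external file (the file name here is hypothetical):

    <!-- "defer" lets the HTML content parse and render first; the script
         still runs once the document has been parsed. -->
    <script src="/js/site.js" defer></script>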
-
Agreed. Besides, maybe someone (a newbie like me!) with the same question could see how I figured it out, then try it on their own. Or someone can see what I did and say "wait, that's not right ... ".
I think it comes from my mentality of not wanting to waste people's time on questions I found the answer to - but, yes, we wouldn't want to punish the people putting time into answering, especially when it can help someone else. Thanks for bringing that up, Keri!
-
I would agree. A delete option is not necessary.
-
Roger is very reluctant to delete questions, and feels that in most cases it's not TAGFEE to do so. Usually, by the time the original poster wants to delete a question, there are multiple responses, and deleting the question would also remove the effort other community members have put into answering it, and remove the opportunity for other people to learn from the experience.
-
Haven't figured that one out either :). Apparently Roger Mozbot does not like questions being deleted, only edited :)
Related Questions
-
Site not getting indexed by Googlebot
The following question is in regard to http://footeschool.org/. The site is not getting indexed by Google (Googlebot), and this only happens when the user agent is set to Googlebot. This is a recent issue. We are using DNN as our CMS. Are there any suggestions to help resolve this issue?
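One quick check worth trying (a hypothetical diagnostic, assuming curl is available): request the page with and without a Googlebot user agent and compare the response headers. If they differ, something server-side (a firewall rule or a DNN module) is treating the crawler differently.

    # Normal request
    curl -I http://footeschool.org/
    # Same request, identifying as Googlebot
    curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://footeschool.org/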
Technical SEO | bcmull
-
Mobilegeddon Help - Googlebot Mobile cHTML & Mobile: XHTML/WML
Our website is www.billboard.com, and we have a mobile website on a sub-domain (www.m.billboard.com). We are currently only redirecting Googlebot type "Mobile: Smartphone" to our m.billboard.com domain. We are not redirecting Googlebot Mobile: cHTML & Mobile: XHTML/WML. Using this URL as an example: http://www.billboard.com/articles/news/1481451/ravi-shankar-to-receive-lifetime-achievement-grammy, I fetched the URL via Google Webmaster Tools: http://goo.gl/8m4lQD As you can see, only the third Googlebot Mobile was redirected, while the first two Googlebot Mobile spiders resolved 200 for the desktop page. QUESTION: could this be hurting our domain / any page that is not redirecting properly post-Mobilegeddon?
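If the goal is to send the feature-phone crawlers to the mobile domain as well, one option is a user-agent rule at the server level. A minimal sketch, assuming Apache with mod_rewrite (Googlebot-Mobile is the user-agent token the cHTML and XHTML/WML crawlers share):

    # Hypothetical .htaccess addition: redirect any Googlebot-Mobile
    # variant to the equivalent path on the mobile sub-domain.
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} Googlebot-Mobile [NC]
    RewriteRule ^(.*)$ http://www.m.billboard.com/$1 [R=302,L]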
Technical SEO | Jay-T
-
Temporarily suspend Googlebot without blocking users
We'll soon be launching a redesign, on a new platform, migrating millions of pages to new URLs. How can I tell Google (and other crawlers) to temporarily (a day or two) ignore my site? We're hoping to buy ourselves a small bit of time to verify redirects and live functionality before allowing Google to crawl and index the new architecture. GWT's recommendation is to 503 all pages - including robots.txt, but that also makes the site invisible to real site visitors, resulting in significant business loss. Bad answer. I've heard some recommendations to disallow all user agents in robots.txt. Any answer that puts the millions of pages we already have indexed at risk is also a bad answer. Thanks
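One possible middle ground (a sketch only, under the assumption that serving crawlers a different response than users is acceptable for a day or two): return the 503 selectively by user agent at the server level, so human visitors never see it. Assuming Apache with mod_rewrite and mod_headers:

    # Hypothetical .htaccess sketch: 503 only to major crawlers during
    # the migration window; human visitors are served normally.
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} (Googlebot|bingbot|Slurp) [NC]
    RewriteRule .* - [R=503,L,E=MIGRATING:1]
    # Suggest a revisit time of two days so crawlers back off politely.
    Header always set Retry-After "172800" env=MIGRATING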
Technical SEO | lzhao
-
Limit for words on a page?
Some SEOs might say that there is no such thing as too much good content. Do you try to limit the words on a page to under a certain number for any reason?
Technical SEO | Charlessipe
-
Why is Rogerbot (or, if it is the case, Googlebot) not recognizing keyword usage in my body text?
I have a client that does liposuction as one of their main services. They have been ranked in the top 1-5 for their keyword "sarasota liposuction" (with different variations of the words) for a long time, and have suddenly dropped about 10-12 places, down to #15 in the engine. I went to investigate this and came to the on-page analysis tool in SEOmoz Pro, where, oddly enough, it says there is no mention of the target keyword in the body content (screenshot attached). I didn't quite understand why it would not recognize the obvious keywords in the body text, so I went back to the page and inspected further. The keywords are wrapped in a featured link that points to an internally hosted keyword glossary with definitions of terms people might not know. These definitions pop up in a lightbox upon clicking the keyword (screenshots attached). I have no idea why Google would not recognize these words, as the text sits between the link tags; yet if there is something wrong with the code syntax, it might prevent the engine from seeing the body text of the link. Any help would be greatly appreciated! Thank you so much!
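For comparison, the distinction that usually matters in cases like this (a simplified sketch with hypothetical markup): anchor text that exists in the HTML source is visible to crawlers even if a script intercepts the click, while text injected purely by JavaScript may not be.

    <!-- Crawlable: the keyword is real anchor text in the source HTML,
         even if a lightbox script intercepts the click. -->
    <p>Our <a href="/glossary/liposuction" class="glossary-term">liposuction</a> procedures are performed on-site.</p>

    <!-- Not reliably crawlable: the keyword only exists after JavaScript
         fills the element in. -->
    <p>Our <span class="glossary-term" data-term="liposuction"></span> procedures are performed on-site.</p>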
Technical SEO | jbster13
-
Image Size for SEO
Hi there, I have a website with some PNG images on its pages, around 300 KB each - is this too much? How many KB should a page be? To what extent does Google care about page load speed? Is every KB important? Is there a limit? Any advice much appreciated.
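One easy win before worrying about hard limits: recompress the PNGs. A minimal sketch, assuming the Python Pillow library is installed and using a hypothetical file name:

    # pip install Pillow
    from PIL import Image

    img = Image.open("hero.png")                   # e.g. a ~300 KB PNG
    img.save("hero-optimized.png", optimize=True)  # lossless recompression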
Technical SEO | pauledwards
-
Images on page appear as 404s to Googlebot
When I fetch my website as Googlebot, it returns 404s for all the images on the page. This is despite the fact that each image is hyperlinked! What could be causing this issue? Thanks!
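One way to see exactly which image URLs fail (a hypothetical sketch, assuming Python with the requests and beautifulsoup4 packages, and using example.com as a stand-in for the real site): fetch the page with a Googlebot user agent and report the status code of every image on it.

    import requests
    from bs4 import BeautifulSoup

    UA = {"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"}
    page = requests.get("http://example.com/", headers=UA)
    soup = BeautifulSoup(page.text, "html.parser")
    for img in soup.find_all("img"):
        # Resolve relative src paths against the page URL.
        src = requests.compat.urljoin(page.url, img.get("src", ""))
        print(requests.head(src, headers=UA).status_code, src)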
Technical SEO | Netpace
-
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, most of which are not SEO-worthy. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:

    User-agent: rogerbot
    Disallow: /-p/

However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay, I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value for the column "Search Engine blocked by robots.txt" is FALSE. Does this mean blocked for all search engines? Then it's correct. If it means "blocked for rogerbot", then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on trying to attain my goal would REALLY be appreciated; I've been trying for weeks now. Honestly - virtual beers for everyone! Carlo
Technical SEO | AspenFasteners
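For anyone hitting the same wall, one thing worth checking (an assumption about the cause, not a confirmed diagnosis): Disallow: /-p/ only matches URL paths that begin with /-p/. Catching a folder that ends in -p anywhere in the path takes a wildcard, which Googlebot supports and which rogerbot should honor as well:

    User-agent: rogerbot
    # The * matches any characters, so this blocks any path containing a
    # folder whose name ends in -p, e.g. /...-Anodized-p/rv006-....htm
    Disallow: /*-p/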