How much JavaScript does Googlebot read?
-
We have a site with certain navigational links that exist solely for the human user. These links help the user experience and lead to pages that we don't need crawled by Googlebot. The links are built with JavaScript, so if you disable JavaScript they are invisible. Will these links be considered cloaking, even though our intention is not to cloak but to save our Google crawl budget for pages we do want indexed?
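To illustrate, here is a simplified sketch of the kind of thing we are doing (the element ID, URLs, and link text are made up, not our actual markup):

```html
<!-- The nav links only exist once JavaScript runs, so a client without
     JavaScript (or one that doesn't execute it) never sees them. -->
<div id="member-nav"></div>

<script>
  // Pages that are useful to visitors but that we don't need crawled.
  var extraLinks = [
    { href: '/account/preferences', text: 'Preferences' },
    { href: '/compare/saved', text: 'Saved comparisons' }
  ];

  var nav = document.getElementById('member-nav');
  extraLinks.forEach(function (link) {
    var a = document.createElement('a');
    a.href = link.href;
    a.textContent = link.text;
    nav.appendChild(a);
  });
</script>
```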
-
Hi CruiseControl, if you want to see how Google views your website you can download a tool called Lynx. Lynx is a text-based browser, which is very similar to how Google's crawler views your website.
-
Thank you all for your input.
-
I wrote up a reply, then decided to investigate a point and found a nice interview with Matt Cutts from 2010. The relevant quotes are:
Matt Cutts: For a while, we were scanning within JavaScript, and we were looking for links. Google has gotten smarter about JavaScript and can execute some JavaScript. I wouldn't say that we execute all JavaScript, so there are some conditions in which we don't execute JavaScript.
Eric Enge: If someone did choose to do that (JavaScript encoded links or use an iFrame), would that be viewed as a spammy activity or just potentially a waste of their time?
Matt Cutts: I am not sure that it would be viewed as a spammy activity, but the original changes to NoFollow to make PageRank Sculpting less effective are at least partly motivated because the search quality people involved wanted to see the same or similar linkage for users as for search engines. In general, I think you want your users to be going where the search engines go, and that you want the search engines to be going where the users go.
Article link: http://www.stonetemple.com/articles/interview-matt-cutts-012510.shtml
-
There are circumstances where you are allowed to use 'cloaking', as some very influential websites have done; however, in your particular situation a nofollow tag and a noindex tag would be the 'normal' procedure.
Personally, I think it is a grey area. You are not using the JavaScript to hide content as such, and provided you are clearly not trying to manipulate the system, there should be no reason why you would be penalised for it.
-
I would say yes, they are cloaked links. I would suggest using plain HTML links only, for maximum link juice and to avoid angering Googlebot. Serving different content to users with and without JavaScript is a no-no. As for your crawl budget, best practice is to use a nofollow attribute on the link and a noindex meta tag on the target page if you don't want it in the SERPs.
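A minimal sketch of that suggestion, with made-up URLs (a plain HTML link marked nofollow, plus a robots meta tag on the target page to keep it out of the SERPs):

```html
<!-- On the linking page: a plain HTML link that Googlebot can see,
     marked nofollow so it doesn't pass link equity. -->
<a href="/members/saved-searches" rel="nofollow">Saved searches</a>

<!-- On the target page (/members/saved-searches): keep it out of the index. -->
<head>
  <meta name="robots" content="noindex, follow">
</head>
```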
Related Questions
-
Help with Getting Googlebot to See Google Charts
We received a message from Google saying we have an extremely high number of URLs that are linking to pages with similar or duplicate content. The main difference between these pages is the Google charts we use. It looks like Google isn't able to see these charts (most of the text is very similar), and the charts (lots of them) are the main difference between these pages. So my question is: what is the best approach to allowing Google to see the data that exists in these charts? I read here http://webmasters.stackexchange.com/questions/69818/how-can-i-get-google-to-index-content-that-is-written-into-the-page-with-javascr that a solution would be to have the text that is displayed on the charts coded into the HTML and hidden by CSS. I'm not sure, but it seems like a bad idea to have it seen by Google but hidden from the user by CSS. It just sounds like a cloaking hack. Can someone clarify if this is even a solution, or is there a better solution?
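For clarity, what that Stack Exchange suggestion describes looks roughly like this (a hypothetical sketch; the chart ID, class name, and figures are made up):

```html
<!-- The chart itself is drawn by JavaScript, while the same figures are
     written into the HTML but hidden from visitors with CSS. -->
<style>
  .chart-data-fallback { display: none; }
</style>

<div id="salary_chart"></div>

<div class="chart-data-fallback">
  <p>Median base salary: $72,400. 25th percentile: $61,000. 75th percentile: $85,300.</p>
</div>
```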
Technical SEO | ERICompensationAnalytics
-
Having JavaScript at the top of the source code
Dear Moz community, in our company we are torn about the influence of having a ton of JavaScript at the top of our source code. While our tech guys are downplaying its influence, us marketers aren't quite sure. The link is here: view-source:http://www.bettingexpert.com/tips/football/italy/serie-a It is the JavaScript that is loaded right after the … Would this be a problem with Google? Thank you very much,
William
Technical SEO | BetterCollective
-
I've consolidated other domains to a single one with 301 redirects, yet the new domain authority in Moz is much less than the redirected ones. Is that right?
I'm trying to increase the domain authority of my main site, so I decided to consolidate other sites. One of the other sites has a much higher domain authority, but I don't know why, after a 301 redirect, the new site's domain authority hasn't changed in over a month. Does Moz take these types of things into account?
Technical SEO | bytecgroup
-
I have a custom 404 page and am getting a lot of 404 errors in Google Webmaster Tools; what should I do?
I have a custom 404 page with popular post and category links on the page, but every day I get 404 crawl errors in Webmaster Tools. What should I do?
Technical SEO | rimon5693
-
Googlebot takes 5 times longer to crawl each page
Hello all. From about mid-September my GWMT has shown that the average time to crawl a page on my site has shot up from an average of 130ms to an average of 700ms, with peaks at 4000ms. I have checked my server error logs and found nothing there, and I have checked with the hosting company and there are no issues with the server or other sites on the same server. Two weeks after this my rankings fell by about 950 places for most of my keywords. I am really just trying to eliminate this as a possible cause of these ranking drops. Or was it the Panda/EMD algo that has done it? Many thanks, Si
Technical SEO | spes123
-
How valuable is content "hidden" behind a JavaScript dropdown really?
I've come across a method implemented by some SEO agencies to fill up pages with somewhat relevant text and hide it behind a JavaScript dropdown. Does Google fall for such cheap tricks? You can see this method used on these pages, for example (just scroll down to the bottom); it's all in German, but you get the idea, I guess: http://www.insider-boersenbrief.de/ http://www.deko-und-kerzenshop.de/ What is your experience with this way of adding content to a site? Do you think it is valuable, or will it get penalised?
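The pattern being described looks roughly like this (a simplified, hypothetical example; the real pages are in German and more elaborate):

```html
<!-- The text is present in the HTML source, but stays collapsed
     until the visitor clicks the toggle. -->
<button onclick="document.getElementById('seo-text').style.display = 'block'">
  Read more
</button>

<div id="seo-text" style="display: none;">
  <p>Several paragraphs of keyword-rich copy that most visitors never open…</p>
</div>
```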
Technical SEO | jfkorn
-
301'ing Googlebot
I have a client that has been 301'ing Googlebot to the canonical page. This is because they have cart_id and session parameters in their URLs. It mainly happens when Googlebot comes in on a link that has these parameters in the URL, as they don't serve these parameters up to Googlebot at all once it starts to crawl the site.
I am worried about cloaking; I wanted to know if anyone has any info on this.
I know that Google has said that doing anything where you detect Googlebot's user agent and treat it differently is a problem.
Has anybody had any experience with this? I would be glad to hear.
Technical SEO | AlanMosley