Will Google Count Links Loaded from JavaScript Files After the Page Loads?
-
Hi,
I have a simple question. I want to put an image with a link to another site (like a banner ad) on my page, but I do not want the link counted by Google. Can I simply load the link and banner using a jQuery onload handler from a separate .js file?
The ideal result would be for Google to index a script tag instead of a link.
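Something like this is what I have in mind, as a rough sketch (the target URL, image path, and container id are just placeholders):

```javascript
// banner.js: loaded as a separate script file.
// Injects the banner link only after the page has finished loading.
// The target URL, image path, and #banner-slot id are placeholders.
$(window).on('load', function () {
  $('#banner-slot').append(
    '<a href="https://example-directory.com/">' +
    '<img src="/images/their-banner.png" alt="Directory banner"></a>'
  );
});
```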
-
Good answer. I completely abandoned the banner I was thinking of using. It was from one of those directories that will list your site for free if you display their banner on yours. Their code, of course, included a link back to them with some optimized anchor text. I was looking for a way to display the banner without becoming a link farm for them.
Then I decided that I did not want that kind of thing on my site at all, even inside a JavaScript onload event, if Google is going to crawl it anyway, so I did not add it.
Then I started thinking about user-generated links. How could I let people cite a source that other users can click on, without exposing my site to hosting spammy links? I originally used an ASP.NET LinkButton with a ConfirmButton extender from the AJAX Control Toolkit: it would display the URL and ask the user whether they wanted to go there, then redirect once they clicked the confirm button. The problem was that the URL still appeared in the head portion of the DOM.
I replaced that with a feature that uses a modal popup: a JavaScript function fires when the link button is clicked, makes an AJAX call to a web service that fetches the link from the database, and then writes an iframe into a div in the modal's panel. The result should be that the user can view the source without leaving the site, but a lot of sites appear to block framing with headers like X-Frame-Options, so I will probably switch to a solution that uses the modal without the iframe. I am thinking of using something like cURL server-side to grab content from the page and write it into the modal panel along with a clickable link. All of this happens only after the user clicks the link button, so none of it appears in the source code when the page loads.
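Roughly, the client side of the current version looks like this (the web service endpoint, element ids, and response shape are simplified placeholders from my setup):

```javascript
// Fires when the citation link (rendered without an href) is clicked.
// The endpoint and element ids below are placeholders.
function showCitation(citationId) {
  $.ajax({
    url: '/CitationService.asmx/GetLink',
    type: 'POST',
    contentType: 'application/json; charset=utf-8',
    data: JSON.stringify({ id: citationId }),
    success: function (response) {
      var url = response.d; // ASP.NET web services wrap JSON results in "d"
      // Write the iframe into the modal's panel, then show the modal.
      $('#citationPanel').html(
        '<iframe src="' + url + '" width="100%" height="400"></iframe>'
      );
      $('#citationModal').show();
    }
  });
}
```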
-
I think what we really need to understand is: what is the purpose of hiding the link from Google? If it's to prevent the discovery of a URL, or to prevent the indexation of a certain page (or set of pages), it's easier to achieve the same thing by using meta noindex directives, wildcard-based robots.txt rules, or by simply denying Googlebot's user agent access to certain pages entirely.
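For illustration, here's what those two options look like (the paths are placeholders); note that robots.txt blocks crawling, while the meta tag blocks indexing:

```
# robots.txt: a wildcard rule that blocks crawling of a set of URLs
User-agent: *
Disallow: /private-offers/*

# The on-page equivalent is a meta noindex directive in the <head>:
#   <meta name="robots" content="noindex">
```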
Is it that important to hide the link, or is it that you want to prevent access to certain URLs from within Google's SERPs? Another option, obviously, is to block users / sessions referred from Google (specifically) from accessing the pages. There's a lot that can be done, but a bit of context would be cool.
By the way, nofollow does not prevent Google from following links; it just stops PageRank from passing across. I know, it was badly named.
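For anyone unfamiliar, that's the attribute form (the URL is a placeholder):

```html
<!-- A hint about PageRank, not a barrier to crawling -->
<a href="https://example-directory.com/" rel="nofollow">Directory</a>
```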
-
What about a form action? Instead of an a element with an href attribute, you could add a form element whose action attribute points to what the href would have been in a link.
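Something like this, roughly (the URL is a placeholder):

```html
<!-- Instead of: <a href="https://external-site.example/">Visit</a> -->
<form action="https://external-site.example/" method="get">
  <button type="submit">Visit the site</button>
</form>
```

A GET form submit takes the user to the action URL much like a link would, but there is no a/href pair in the markup.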
-
Thanks for that answer. You obviously know a lot about this issue. I guess they would be able to tell if the .js file creates an a element with a specific href attribute and then adds that element to a specific div after the page loads.
It sounds like it might be easier just to nofollow those links instead of going to all the trouble of redirecting the .js file whenever Googlebot crawls the page. I fear that could be considered cloaking.
Another possibility would be a confirmation step that requires user interaction before grabbing a URL from the database: the user clicks a link without an href, the JavaScript onclick fires, the script fetches the URL from the database, the user is asked to click a button if they want to proceed, and then the user is redirected to the external URL. That should keep the external URL out of the page source.
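Sketched out, that flow would be something like this (the endpoint name is a placeholder again):

```javascript
// The link is rendered with no href, so the external URL never
// appears in the page source; it is fetched only on user click.
function goToSource(citationId) {
  $.ajax({
    url: '/CitationService.asmx/GetLink',
    type: 'POST',
    contentType: 'application/json; charset=utf-8',
    data: JSON.stringify({ id: citationId }),
    success: function (response) {
      var url = response.d;
      // Require an explicit confirmation before redirecting.
      if (window.confirm('Go to ' + url + '?')) {
        window.location.href = url;
      }
    }
  });
}
```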
-
Google can crawl JavaScript and its contents, but most of the time it is unlikely to do so. To do this, Google has to do more than a basic source-code scrape. Like everyone else seeking to scrape data from inside generated elements, Google has to check the modified source code after all of the scripts have run (the render), rather than the base (unmodified) source code before any scripts fire.
Google's mission is to index the web. There's no doubt that non-rendered crawls (which do not contain the generated HTML output of scripts) can be done in a fraction of the time it takes to get a rendered snapshot of the page code. On average, I have found rendered crawling takes 7x to 10x longer than basic source scraping.
What we have found is that Google is indeed capable of crawling generated text and links and such, but it won't do this all the time, or for everyone. Those resources are more precious to Google, and it crawls in that manner more sparingly.
If you deployed the link in the manner you have described, my expectation is that Google would not notice or evaluate the link for a month or two (if you're not super popular). Eventually they would detect the presence of the link, at which point it would be factored in and / or evaluated.
I suppose you could embed the script as a link to a '.js' module and then use robots.txt to ban Google from crawling that particular JavaScript file. If they chose to obey that directive, the link would pretty much remain hidden from them. But remember, it's only a directive!
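Something like this, assuming the link lives in a file called banner.js (the path is a placeholder):

```
# robots.txt: ask Googlebot not to fetch the script that injects the link
User-agent: Googlebot
Disallow: /scripts/banner.js
```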
If you wanted to be super harsh, you could block the Googlebot user agent from that JS file and do something like 301 it to the homepage when it tried to access the file (instead of allowing it to open and read the JS). That would be pretty hardcore, but it would stand a higher chance of actually working.
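On Apache, a hypothetical mod_rewrite version would look something like this (the file path is a placeholder, and this is a sketch rather than a recommendation):

```apache
# .htaccess: 301 Googlebot to the homepage when it requests banner.js
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteRule ^scripts/banner\.js$ / [R=301,L]
```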
Think carefully about this kind of stuff, though. It would be pretty irregular to go to such extremes, and I'm not certain what the consequences of such actions would be.