Does Googlebot read embedded content?
-
Is embedded content "really" on my page?
Many add-ons nowadays are included via embed code and load their text after the page has loaded.
For example - embedded surveys.
Are these read by Googlebot, or do they in fact act like iframes, with the content not physically on my page?
Thanks
-
If you look at most of the Facebook comment implementations, they're usually embedded with an iframe.
Technically speaking, that is making the content load from another source (not on your site).
However, as we constantly see Google evolve with regard to "social signals," I suspect embedded Facebook comments may begin to have an impact when they pertain to content that is actually located on your website.
-
Thanks!
I'm guessing it will remain a no for me, since these are third-party scripts - a black box, for that matter.
What do you think about Facebook comments then?
Not readable either?
-
I didn't see any recent test for 2013, but this has been analyzed quite a bit, and the two links below expand on what I mentioned.
The conclusion of the first one is that Google won't index content loaded dynamically from a JavaScript file on another server/domain.
http://www.seomoz.org/ugc/can-google-really-access-content-in-javascript-really
Here's the link that talks about extra programming necessary to make AJAX content crawlable and indexable.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=174992
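As a rough sketch of what that Google doc describes (the 2013-era AJAX crawling scheme; the example URL is invented, and `encodeURIComponent` is only a crude approximation of the exact escaping the scheme specifies): a page addressed with a "#!" fragment is requested by the crawler with the fragment moved into an `_escaped_fragment_` query parameter, and your server is expected to answer that URL with a static HTML snapshot of the dynamic content.

```javascript
// Sketch of the URL mapping from the AJAX crawling scheme linked above.
// A "#!" (hashbang) URL is rewritten by the crawler so the fragment,
// which servers normally never see, becomes a query parameter the
// server can respond to with a pre-rendered HTML snapshot.
function crawlerUrlFor(prettyUrl) {
  const idx = prettyUrl.indexOf('#!');
  if (idx === -1) return prettyUrl; // no hashbang: nothing to map
  const base = prettyUrl.slice(0, idx);
  const fragment = prettyUrl.slice(idx + 2);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}

console.log(crawlerUrlFor('http://example.com/page#!key=value'));
// -> http://example.com/page?_escaped_fragment_=key%3Dvalue
```

This is the "extra programming" the doc refers to: the embed provider (or you) would have to serve that snapshot URL, which most third-party widgets don't do by default.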
-
Thank you all.
Here is an example from SurveyMonkey:
There are many other tools that look quite the same.
The content it loads is not visible in the view source.
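To illustrate why, here is a hedged sketch of what such an embed snippet typically looks like (the container id, script URL, and question text are all invented, not SurveyMonkey's actual code): the raw HTML the server sends contains only an empty container and a script tag, and the survey text is injected at runtime.

```javascript
// Hypothetical embed snippet of the kind survey tools hand out.
// The raw HTML contains only the <script> tag and an empty container;
// the survey text is injected after widget.js runs in a browser.
const rawHtml = `
  <div id="survey-container"></div>
  <script src="https://embed.example-surveys.com/widget.js"></script>
`;

// A crude stand-in for the "view source" test: does the questionable
// text appear in the HTML the server actually sent?
function visibleInSource(html, text) {
  return html.includes(text);
}

console.log(visibleInSource(rawHtml, 'How satisfied are you?')); // false:
// the question text only exists in the rendered DOM, never in view-source
```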
-
Googlebot has become extremely intelligent since its inception, and most members here would probably agree it has gotten to the point where it can detect virtually any type of content on a page.
For analyzing the content it actually indexes and uses for ranking / SEO, however, I'd venture that the best test is viewing the raw page source (view-source, not the rendered DOM).
If you can see the content in question in the actual HTML, Google will probably index it and weight it considerably for ranking purposes.
On the other hand, if you just see some type of JavaScript snippet / function where the content would otherwise be located, Google can probably read it, but likely won't weight it heavily when indexing and ranking.
There are special ways to get Google to crawl content loaded through JavaScript or other types of embeds, but in my experience most embeds are not programmed that way by default.
-
It's easier to analyze if you have an example URL. These can be coded in many different ways, and a slight change can make a difference.
-
What language is the code of the embedded survey?