Google Knowledge Graph related question
-
I have a client who is facing age discrimination in the film industry. (Big surprise there.) The problem is, when you type in his name, Google's new Knowledge Graph displays a brief bio about him to the right of the search results. This bio snippet includes his year of birth. Wikipedia is credited as the source for the bio information about him, and yet his Wikipedia entry doesn't include his age or birth date. Neither does his IMDb bio. So the question is: how can he figure out where Google is getting that birth date? He wants to try to remove it, not falsify it. Thanks for any help you can offer.
-
Great answer, Ryan. And yes, I did a Google search on the client's name and his year of birth. After some digging, I did find a Freebase article on him that listed his year of birth, so I simply edited the date out. We'll see if that works.
Thanks so much for your fast and thorough response!
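For anyone else in this situation: one way to verify whether the birth date has actually gone from Freebase is to query its read API (MQL) directly rather than eyeballing the topic page. A rough sketch of building such a request - the endpoint and the /people/person/date_of_birth property name are as I recall them from the Freebase docs, so double-check before relying on them:

```python
import json
import urllib.parse

def freebase_dob_query_url(person_name):
    """Build an MQL read URL asking Freebase for a person's date of birth.

    Returns the request URL; fetch it with any HTTP client to see whether
    a date_of_birth value is still attached to the topic.
    """
    query = [{
        "type": "/people/person",
        "name": person_name,
        "/people/person/date_of_birth": None,  # null means "return this field"
    }]
    params = urllib.parse.urlencode({"query": json.dumps(query)})
    return "https://www.googleapis.com/freebase/v1/mqlread?" + params

print(freebase_dob_query_url("Some Actor"))
```

Bear in mind the Knowledge Graph can lag its sources, so the panel may keep showing the date for a while even after the edit sticks.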
-
http://support.google.com/websearch/bin/answer.py?hl=en&answer=2620861
Here are just some of the sources for this web of information:
- online resources like Wikipedia
- subject-specific resources like Weather Underground for weather information and the World Bank for economic statistics
- publicly available data from Freebase.com, a free and open database of over 24 million things, including movies, books, TV shows, celebrities, locations, companies, and more
- Google search data (used to measure the popularity of a subject and help decide what information people most want to see)
There are two other official sources of information regarding the Knowledge Graph that I am aware of:
http://googleblog.blogspot.com/2012/05/introducing-knowledge-graph-things-not.html
http://www.google.com/insidesearch/features/search/knowledge.html
Both are more promotional in nature and lack any detail about the mechanics or sources involved.
I presume you have tried searching Google for your client's name plus his date of birth, and similar queries?
There is a (long-shot) chance you can use the Report a Problem feature to explain the problem your client is facing and ask that the information be removed, though I have a difficult time imagining Google will remove accurate information.
Related Questions
-
Google Crawling Issues! How Can I Get Google to Crawl My Website Regularly?
Hi everyone! My website is not being crawled regularly by Google - some weeks it's regular, but for the past month or so it goes seven to eight days without being crawled. There are some specific pages that I want to get ranked, but lately they are not being crawled AT ALL unless I use the 'Fetch as Google' tool! That's not normal, right? I have checked and re-checked the on-page metrics for these pages (and the website as a whole), backlinking is a regular and ongoing process, and the sitemap is in place - I resubmitted it once too! This issue is detrimental to website traffic and rankings! Would really appreciate insights from you guys! Thanks a lot!
Technical SEO | farhanm
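Before concluding Google is skipping the site, it can help to measure how often Googlebot actually hits it by checking the server access logs. A minimal sketch, assuming the common combined log format (verifying the IPs really belong to Google via reverse DNS is left out here):

```python
import re
from collections import Counter

# Matches the date portion of a combined-log-format timestamp,
# e.g. [12/Mar/2013:06:25:01 +0000]
LOG_DATE = re.compile(r'\[(\d{2}/\w{3}/\d{4}):')

def googlebot_hits_per_day(log_lines):
    """Count access-log lines whose user agent mentions Googlebot,
    grouped by date, to see real crawl frequency."""
    hits = Counter()
    for line in log_lines:
        if "Googlebot" not in line:
            continue
        m = LOG_DATE.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits
```

If the counts show Googlebot visiting daily but skipping specific URLs, the problem is more likely internal linking or crawl priority than a site-wide crawling issue.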
When will all of Google Maps be the same again?
As many of you are aware, the Pigeon update was only applied to the new Google Maps, resulting in very different search results for Google local businesses. When you search for a business on the old Google Maps you get totally different results than on the new Google Maps, and some businesses have disappeared from the results completely. I have done my research and found that this is because the new algorithm was only applied to the new Maps. The new algorithm also doesn't apply to other countries. The reason I posted this topic is that I have noticed all the new Google Business listings I am verifying for my clients are being put under the old Google Maps, not the new ones. They come up fine when searching from the old Maps but not the new ones. I understand Google has not rolled out Pigeon on all data centers, but why? Will Google eventually roll out the update to the old Maps? And since Google is adding businesses to the old Google Maps, what's the point of even adding new listings?
Technical SEO | bajaseo
How Google sees my page
When looking for crawlability issues, what is the difference between using Webmaster Tools' Fetch as Google, looking at the cached pages in Google's index (site:mypage.com), and using spider simulator tools?
Technical SEO | shashivzw
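The three views differ mainly in freshness: Fetch as Google shows what Googlebot retrieves right now, the cache shows what it stored on its last visit, and a spider simulator is just a local approximation - essentially a tag-stripper that keeps the visible text and links a text-only crawler would extract. A minimal sketch of that third idea:

```python
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Crude spider simulator: keep visible text, drop script/style
    content, and collect link targets."""
    def __init__(self):
        super().__init__()
        self.skip = 0       # depth inside <script>/<style>
        self.text = []      # visible text fragments
        self.links = []     # href targets a crawler could follow
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")
    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip -= 1
    def handle_data(self, data):
        if not self.skip and data.strip():
            self.text.append(data.strip())

p = TextOnly()
p.feed('<html><head><script>var x=1;</script></head>'
       '<body><h1>Hello</h1><a href="/about">About</a></body></html>')
print(p.text, p.links)
```

Because it runs against the HTML you hand it, a simulator can't reveal crawl blocks, redirects, or cloaking the way Fetch as Google can - it only shows how the markup parses.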
Canonical Expert question!
Hello, I am looking for some help here with an estate agent property website. I recently finished the Moz crawling report and noticed that Moz sees some pages as duplicates, mainly the pages which list properties as page 1, 2, 3, etc. Here is an example: http://www.xxxxxxxxx.com/property-for-rent/london/houses?page=2
http://www.xxxxxxxxx.com/property-for-rent/london/houses?page=3 and so on. Now, I know best practice says I should set a canonical URL pointing to this page: http://www.xxxxxxxxx.com/property-for-rent/london/houses?page=all - but here is where my problem is. http://www.xxxxxxxxx.com/property-for-rent/london/houses?page=1 contains good written content (around 750 words) before the listed properties are displayed, while the "page=all" page does not have that content, only the properties listed. Also, http://www.xxxxxxxxx.com/property-for-rent/london/houses?page=1 is similar to the originally designed landing page http://www.xxxxxxxxx.com/property-for-rent/london/houses. I would like your advice on the best way to canonicalize this and sort the problem. My original thought was to set the canonical to http://www.xxxxxxxxx.com/property-for-rent/london/houses instead of the "page=all" version, but your opinion will be highly appreciated.
Technical SEO | artdivision
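If the landing-page option wins out, the rel=canonical target for every paginated variant can be computed by stripping the page parameter. A tiny sketch of that mapping (example.com stands in for the real domain; this implements the asker's preferred option, not the only valid one):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonical_for(url):
    """Map any ?page=N variant of a listing URL to the bare listing URL,
    preserving any other query parameters."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "page"]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), ""))

print(canonical_for("http://example.com/property-for-rent/london/houses?page=2"))
```

The returned URL is what would go in each page's `<link rel="canonical" href="...">` element.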
Paid Links - How does Google classify them?
Greetings all, I have a question regarding "paid links." My company creates custom websites for other small businesses across the country. We always have backlinks to our primary website from our "dealer sites." Would Google and other search engines consider links from our "dealer sites" to be "paid links"? Example: http://www.atlanticautoinc.com/ is a "dealer site." Would Google consider the links from Atlantic Auto to be "paid links," and therefore have less of an impact on page rankings because they are not organic? Any insight on this matter would be greatly appreciated. Thank you!
Technical SEO | CFSSEO
Site removed from Google Index
Hi Mozzers, Two months ago we published http://aquacion.com We registered it in Google Webmaster Tools, and after a few days the website was in the index, no problem. But now Webmaster Tools tells us the URLs were manually removed. I've looked everywhere in Webmaster Tools for more clues but haven't found anything that would help me. I sent the access details to the client, who might have been careless enough to remove his own site from the Google index, but now, even though I delete and add the sitemap again, the website won't show in Google SERPs. What's weird is that Google Webmaster Tools tells us all the pages are indexed. I'm totally clueless here... P.S.: Added screenshots from Google Webmaster Tools. Update: Turns out it was my mistake after all. When my client developed his website a few months ago, he published it, and I removed the website from the Google index. When the website was finished I submitted the sitemap, thinking it would void the removal request, but it doesn't. How to solve: in Webmaster Tools, on the [Google Index => Remove URLs] page, you can re-include pages there.
Technical SEO | RichardPicard
Google SERPs and NoIndex directives.
We have pages whose URL patterns have been added to robots.txt as Disallow rules. We also have the meta noindex tag on the pages themselves. But we are still finding the pages in the index. I don't think they rank highly, and they don't have any descriptions, previews, or cached pages. Why does Google show these pages? Could it be due to internal or external linking?
Technical SEO | gaganc
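One likely culprit worth checking: a robots.txt Disallow only blocks crawling, so if Googlebot can't fetch a page it never sees the meta noindex tag on it, and the bare URL can stay in the index (typically with no description or cache, exactly as described). Python's standard library can confirm which URLs the robots.txt rules actually block - a sketch with placeholder rules:

```python
import urllib.robotparser

# Placeholder rules standing in for the site's real robots.txt.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A blocked URL: the crawler never fetches it, so it never sees noindex.
print(rp.can_fetch("Googlebot", "http://example.com/private/page.html"))  # False
# An allowed URL: here the meta noindex tag can actually take effect.
print(rp.can_fetch("Googlebot", "http://example.com/public/page.html"))   # True
```

If the noindexed pages turn out to be disallowed, removing them from robots.txt (so the noindex can be crawled) is usually the fix.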
Is this dangerous (a content question)
Hi, I am building a new shop with unique products, but I also want to offer tips and articles on the same topic as the products (fishing). I think if I were to add the articles and advice one piece at a time it would look very empty and give little reason to come back very often. The plan, therefore, is to launch the site pulling articles from a number of article websites - with those sites' permission. Obviously this would be 100% duplicate content, but it would make the user experience much better and offer added value to my site, as people are likely to keep returning even when not in the mood to purchase anything; it also offers the potential for people to email links to friends, etc. Note: over time we will be adding more unique content and slowly turning off the pulled articles. Anyway, from an SEO point of view I know the duplicate content would harm the site, but if I were to tell Google not to index the directory, and block it from even crawling the directory, would it still know there is duplicate content on the site and apply the penalty to the non-duplicate pages? I'm guessing no, but it's always worth a second opinion. Thanks, Carl
Technical SEO | Grumpy_Carl