How does one submit data to the Knowledge Graph?
-
I'm working with a very reputable open-source civic data compiler who would like to contribute their data to the Knowledge Graph so it can be used. Does anybody know where I should start with this?
Or, do you think it's possible to e-mail Google and ask to be included in the Knowledge Graph? The company that owns this compiler will likely have connections to them.
Thanks!
-
That's true. This compiler is very reputable as well, but it's in a fairly unpopular niche, so bloggers aren't going to write about it much. It's reputable, but undiscovered.
From my understanding, Google wants all sorts of reputable information in the Knowledge Graph. The question is: for sources that aren't as popular, how does one get included?
Thanks.
-
Wikipedia didn't call up Google and ask to be there; it earned its place by proving its authority - links, citations, etc.
-
Right, but the Knowledge Graph already leverages authoritative sources, such as Wikipedia, so why can't this be one more source?
-
Not a chance. This works the same way as other organic rankings, strictly based on the algorithm.
-
A few months ago it was still possible to create entries in Freebase ( https://developers.google.com/freebase/ ) that could be reused in the Google Knowledge Graph. Now it's completely closed, and you have to publish web pages, ideally marked up with schema.org structured data, in order to be considered for the Knowledge Graph.
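For example, the compiler's pages could describe the organization in JSON-LD. A minimal sketch, assuming a generic setup - every name and URL below is a placeholder, and markup alone doesn't guarantee inclusion:

```html
<!-- Hypothetical schema.org markup; all names and URLs are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Civic Data Compiler",
  "url": "https://www.example.org/",
  "description": "An open-source compiler of civic data.",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Civic_Data_Compiler",
    "https://github.com/example/civic-data"
  ]
}
</script>
```

The sameAs links are worth the effort: they help Google connect the page to entities it already trusts.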
-
Hi,
There's no way to simply submit content to the Knowledge Graph. For content to achieve Knowledge Graph status, it has to be a proven, reputable, trustworthy source of that information. This is usually accomplished by the same methods you would use for typical SEO efforts: prove your content's worth with social proof, attract high-quality links, become a thought leader in your niche, etc.
Related Questions
-
This one is complicated... canonicals, hreflang tags and noindex
Bear with me, this is complicated (I REALLY hope one of you comes along and says, no it isn't!)

Scenario: A client has multiple English pages, as they have a unique product offering in AUS, US, UK, NZ, and they also have a global site in English. Obviously there is a lot of duplicate content, and they have the relevant hreflang tags set up to help Google untangle what should be ranked where. They also have rel=canonical on each page. I've set up Search Console for each of the folder structures, i.e. en-us, en-gb, en-au and so on.

They have an optimised page for one of their primary keywords, which ranks nowhere for this exact keyword, but this page DOES rank for 40 similar keywords. For the exact keyword, they rank 52nd, and frustratingly, it's the homepage that ranks. We know the correct page is ranking and is indexed because Search Console tells us so, and we see the exact page appear in SERPs for the other 40 keywords.

When I look at the en-us site in Search Console, it tells me that the homepage is not being indexed, because a rel=canonical tag is prioritising an alternative page (probably the global site) - however, the en-us homepage is showing up in rankings for a lot of their important keywords. The site has been live for 6 months and the optimised page for about 3 months.

Questions:
1. If Search Console is saying the homepage is not indexed, how is it showing up in SERPs?
2. Why is the homepage ranking for this important keyword when there is virtually no mention of the keyword, versus the page that is almost perfect according to Moz's on-page grader?
3. Do you need hreflang tags AND rel=canonical on a page?
4. How long does a new page that is optimised for a keyword take to replace (and hopefully surpass) the homepage?
5. If the US is the most important market, should we guide Google to that fact using rel=canonical?

Really appreciate your feedback, hivemind. Thanks!
Intermediate & Advanced SEO | Algorhythm_jT
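On question 3: hreflang and rel=canonical can live on the same page, but each regional page's canonical should normally point to itself. If en-us canonicalises to the global page, Google may drop the en-us page in favour of the target, which matches the Search Console message described above. A hedged sketch of the en-us homepage's head (the domain and folder structure are placeholders):

```html
<!-- Hypothetical <head> of https://www.example.com/en-us/ -->
<link rel="canonical" href="https://www.example.com/en-us/" />
<link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
<link rel="alternate" hreflang="en-au" href="https://www.example.com/en-au/" />
<link rel="alternate" hreflang="en-nz" href="https://www.example.com/en-nz/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```
-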
Will Google merge structured data from two pages if they have the same canonical?
Will Google merge structured data from two pages if they have the same canonical? The crawler should be able to reach the tab through a regular anchor link. The tab in question is "Cast & Crew." Thank you in advance for any insight!
Intermediate & Advanced SEO | catbur
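For reference, this is roughly the pattern being described, with hypothetical URLs. Because the tab canonicalises to the main page, Google will generally index only the canonical URL, so structured data that lives only on the tab may be ignored rather than merged; the safer bet is to put it on the canonical page itself:

```html
<!-- Hypothetical "Cast & Crew" tab that canonicalises to the main title page -->
<link rel="canonical" href="https://www.example.com/title/example-movie/" />
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Movie",
  "name": "Example Movie",
  "actor": [
    { "@type": "Person", "name": "Example Actor" }
  ]
}
</script>
```
-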
Can an "Event" in Structured Data For Google Be A Webinar?
I have a client who has structured data for live business webinars. Google's documentation seems to talk more about music and tickets than this kind of thing. At the same time, we get an error in Search Console for "Name" and location, which they list as "webinar." Should I remove this failed structured data attempt, or is there a way to fix it? Thanks!
Intermediate & Advanced SEO | 94501
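The location error usually means the markup supplies a plain string ("webinar") where an Event expects a Place or, for online events, a VirtualLocation. Schema.org's online-event pattern looks roughly like this sketch (all names, dates and URLs are placeholders; check Google's current event guidelines before relying on it):

```html
<!-- Hypothetical online-event markup; every value is a placeholder -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Example Business Webinar",
  "startDate": "2025-01-15T14:00:00-05:00",
  "endDate": "2025-01-15T15:00:00-05:00",
  "eventAttendanceMode": "https://schema.org/OnlineEventAttendanceMode",
  "location": {
    "@type": "VirtualLocation",
    "url": "https://www.example.com/webinar-room"
  },
  "organizer": {
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com/"
  }
}
</script>
```
-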
Content From One Domain Mysteriously Indexing Under a Different Domain's URL
I've pulled out all the stops and so far this seems like a very technical issue with either Googlebot or our servers. I highly encourage and appreciate responses from those with knowledge of technical SEO/website problems.

First, some background info: three websites, http://www.americanmuscle.com, m.americanmuscle.com and http://www.extremeterrain.com, as well as all of their sub-domains, could potentially be involved. AmericanMuscle sells Mustang parts; ExtremeTerrain is Jeep-only. Sometime recently, Google has been crawling our americanmuscle.com pages and serving them in the SERPs under an extremeterrain sub-domain, services.extremeterrain.com. You can see for yourself below.

Total # of services.extremeterrain.com pages in Google's index: http://screencast.com/t/Dvqhk1TqBtoK When you click the cached version of their supposed pages, you see an americanmuscle page (some desktop, some mobile, none of which exist on extremeterrain.com): http://screencast.com/t/FkUgz8NGfFe All of these links give you a 404 when clicked... Many of these pages have been cached multiple times while still being 404 links - Googlebot has apparently re-crawled them many times, so this is not a one-time fluke.

The services. sub-domain serves both AM and XT and lives on the same server as our m.americanmuscle website, but answers on different ports. services.extremeterrain is never used to feed AM data, so why Google is associating the two is a mystery to me. The mobile americanmuscle website is set to respond only on a different port than services. and only responds to AM mobile sub-domains, not Googlebot or any other user-agent. Any ideas? As one could imagine, this is not an ideal scenario for either website.
Intermediate & Advanced SEO | andrewv
-
Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
I have begun a daily process of analyzing a site's web server log files and have noticed something that seems odd. There are several IP addresses from which Googlebot crawls where our server returns a 301 status code for every request, consistently, day after day. In nearly all cases, these are not URLs that should 301. When Googlebot visits from other IP addresses, the exact same pages are returned with a 200 status code. Is this normal? If so, why? If not, why not? I am concerned that our server returning an inaccurate status code is interfering with the site being crawled as quickly and as often as it might be if this weren't happening. Thanks guys!
Intermediate & Advanced SEO | danatanseo
-
What to do when all products are one-of-a-kind WYSIWYG and URLs are continuously changing. Lots of 404s
Hey Guys, I'm working on a website with WYSIWYG one-of-a-kind products, and the URLs are continuously changing. There are a lot of duplicate page titles (56 currently), but that number is always changing too. Let me give you guys a little background on the website. The site sells different types of live coral, so there may be anywhere from 20 to 150 corals of the same species. Each coral is a unique size, color, etc. When a coral gets sold, the site owner trashes the product, creating a new 404. Sometimes the URL gets indexed; other times it doesn't, since the corals get sold within hours or days. I was thinking of optimizing each product page for a keyword and re-using the URL by having the client update the picture and price, but that still leaves a lot more products than keywords. Here is an example of the corals with the same title: http://austinaquafarms.com/product-category/acans/ Thanks for the help guys. I'm not really sure what to do.
Intermediate & Advanced SEO | aronwp
-
HTML5 one page website on-site SEO
Hey guys, if, for example, I'm faced with a client who has a website similar to http://www.symphonyonline.co.uk/ , how should I proceed with the on-site optimization? Should I create new pages on the website? Should I create a blog for the site to increase my reach? Please give me your tips on how to proceed with this kind of website. Thanks.
Intermediate & Advanced SEO | BruLee
-
Looking for reassurance on this one: Sitemap approach for multi-subdomains
Hi All: Just looking for a bit of "yeah it'll be fine" reassurance on this before we go ahead and implement.

We've got a main accommodation listing website under www.* and a separate travel content site, using a completely different platform, on blog.* (same domain, different sub-domain). We pull in snippets of content from blog.* > www.* using a feed, and we have cross-links going both ways, e.g. links to find accommodation in blog articles and links to blog articles from accommodation listings. Look-and-feel wise they're fully integrated. The blog.* site is a tab under the main nav.

What I'd like to do is get Google (and others) to view this whole thing as one site - and attribute any SEO benefit of content on blog.* pages to the www.* domain. Make sense?

So, I've done a bit of reading, and here's what I've come up with (see the sketch after this list):
1. Separate sitemaps for each, both located in the root of the www site: www.example.com/sitemap-www and www.example.com/sitemap-blog
2. robots.txt in the root of the www site to have a single sitemap entry: Sitemap: www.example.com/sitemap-www
3. robots.txt in the root of the blog site to have a single sitemap entry: Sitemap: www.example.com/sitemap-blog
4. Submit both sitemaps to Webmaster Tools.

Does this sound reasonable? Any better approaches? Anything I'm missing? All input appreciated!
Intermediate & Advanced SEO | AABAB
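For reference, the two robots.txt files in the plan above might look like this sketch (example.com and the sitemap paths are placeholders, following the naming in the question). The blog's entry is a cross-host submission, which sitemaps.org honours precisely because the pointer lives in the robots.txt of the host whose URLs the sitemap lists - so the approach looks sound:

```
# https://www.example.com/robots.txt (hypothetical)
Sitemap: https://www.example.com/sitemap-www

# https://blog.example.com/robots.txt (hypothetical)
# Cross-host pointer: valid per sitemaps.org because this file
# sits on the host whose URLs the sitemap lists.
Sitemap: https://www.example.com/sitemap-blog
```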