When to Use Schema vs. Facebook Open Graph?
-
I have a client who for regulatory reasons cannot engage in any social media: no Twitter, Facebook, or Google+ accounts. No social sharing buttons allowed on the site. The industry is medical devices.
We are in the process of redesigning their site, and would like to include structured markup wherever possible. For example, there are lots of schema types under MedicalEntity: http://schema.org/MedicalEntity
Given their lack of social media (and no plans to ever use it), does it make sense to incorporate OG tags at all? Or should we stick exclusively to the schemas documented on schema.org?
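For context, a minimal sketch of the kind of markup we have in mind (a MedicalDevice item expressed as JSON-LD; the product name and URL are hypothetical placeholders, and microdata attributes on the HTML would work equally well):

  <script type="application/ld+json">
  {
    "@context": "http://schema.org",
    "@type": "MedicalDevice",
    "name": "Example Infusion Pump X100",
    "description": "Hypothetical device used for illustration only.",
    "url": "http://www.example.com/products/x100"
  }
  </script>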
-
Serendipitous timing: this article was posted yesterday about using markup, covering how Open Graph and Schema.org are each used and why to use both:
Facebook Open Graph serves its purpose well, but it doesn’t provide the detailed information search engines need to improve the user experience. A single web page may have many components, and it may talk about more than one thing. Even if you mark up your content for Facebook Open Graph, schema.org provides an additional way to provide more detail about particular entities on the page.
http://searchengineland.com/schema-org-7-things-for-seos-to-consider-post-hummingbird-172163
-
I personally would use both. The way I look at it, with the OG tags you are controlling the consistency of the brand across platforms that you do not officially support. In my mind, this is very much the same thing as making a page display correctly in older versions of IE.
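For illustration, a minimal sketch of the core OG tags on a hypothetical product page (every value here is a placeholder, not a real product or URL):

  <head>
    <meta property="og:title" content="Example Infusion Pump X100" />
    <meta property="og:type" content="website" />
    <meta property="og:url" content="http://www.example.com/products/x100" />
    <meta property="og:image" content="http://www.example.com/images/x100.jpg" />
    <meta property="og:description" content="Short, brand-approved summary shown when someone shares the page." />
  </head>

Even with no official social presence, these tags decide the title, image, and blurb Facebook renders whenever someone else shares the link.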
-
OG and Schema can live in the wild together; they are both ways to present information about the entities they describe.
IMDB is using both OG and Schema to mark up their data:
http://www.imdb.com/title/tt1392170/
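A stripped-down sketch of that pattern (hypothetical values, not IMDB's actual markup): the OG tags sit in the head and only use the property attribute, while schema.org microdata wraps the visible content via itemscope/itemprop, so the two vocabularies never collide.

  <head>
    <meta property="og:title" content="Example Movie" />
    <meta property="og:type" content="video.movie" />
    <meta property="og:url" content="http://www.example.com/title/tt0000000/" />
    <meta property="og:image" content="http://www.example.com/images/poster.jpg" />
  </head>
  <body>
    <div itemscope itemtype="http://schema.org/Movie">
      <h1 itemprop="name">Example Movie</h1>
      <p>Directed by
        <span itemprop="director" itemscope itemtype="http://schema.org/Person">
          <span itemprop="name">Jane Example</span>
        </span>
      </p>
    </div>
  </body>

-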
Thanks, Craig. Do you know if any of the OG and schema tags would duplicate or conflict? I see a lot of documentation about using one or the other, but not how to use both harmoniously.
-
Thanks, Keri, interesting example. While the GE Healthcare site is more commercial in intent, I like how they've treated the share functionality using the node icon. Subtle, yet shareable.
-
I haven't checked in depth; the regulations are with the FDA, and they aren't the most up-to-date on social media practices! No competitors are using OG yet, but their sites are also very under-optimized.
-
This may be way over the top, but have you checked whether OG tags would violate the regulations at all, or whether they could become a violation down the road? Granted, I haven't read the regulations, and I doubt they would... but it's something I'd double-check. I could see a potential problem if the wording is ambiguous and a competitor wants to stir up trouble for you.
-
Given that other people may share those pages, I would incorporate both OG and Schema on the site.
-
Just because you can't share doesn't mean other people won't share it on FB. Just yesterday, I shared http://www3.gehealthcare.com/en/Products/Categories/Accessories_and_Supplies/Adventure_Series_for_CT/Pirate_Island on FB with my friends. I don't have formal experience in this area, but I did want to point that out. There was an article on slate.com about the design of these, and I went looking for more information and found that page.