External video content in iframe
-
Hi,
On our site we have a lot of video content. The player is hosted by a third party, so we use an iframe to embed the content on our site.
The problem is that the content itself (on the third-party domain) is showing up in Google's search results.
My question is: can we ask the third party to disallow the content in their robots.txt, or will that also affect our own use of the video content? For example, we use video sitemaps to include the videos in Google Video search (the sitemap links to the videos on our own domain, but those pages still use iframes to pull the content from the third-party domain that would then be blocked by robots.txt).
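To make the setup concrete, here is a sketch with hypothetical domains and paths (nothing here reflects our actual URLs). The tension is that the same third-party path that a Disallow would hide from Google is also the path our video sitemap points at as the player location:

```
# robots.txt on the third-party player domain (hypothetical path)
User-agent: *
Disallow: /players/ourbrand/
```

```xml
<!-- Video sitemap entry on our own domain (hypothetical URLs).
     The player_loc below points into the path blocked above,
     so Googlebot could no longer fetch the player it references. -->
<url>
  <loc>https://ourdomain.example/videos/summer-campaign</loc>
  <video:video>
    <video:title>Summer campaign</video:title>
    <video:player_loc>https://player.thirdparty.example/players/ourbrand/123</video:player_loc>
  </video:video>
</url>
```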
I hope you understand what I mean... Any suggestions?
Thanks a lot!
-
Hey,
Yes - this is a classic problem. It should be relatively easy to fix in the hosting platform settings.
What third party provider are you using?
-
Have you thought about actually hosting the video yourself? A few of my clients have started doing this for a variety of reasons (control being one of them).
I'm not sure I would ask them to block the video via robots.txt, but it does sound like you could do with more control over this and some added branding.
Could the video be re-worked so it has an introduction / splash screen? Could you add it to YouTube and create a channel there, along with links back?
It sounds to me like you just need to look at what other possibilities exist for directing people over to your site.
Andy
-
Ah, and regarding your last note: they will definitely fix the robots.txt if we ask them. I'm just not sure whether it's a good idea, or whether it will mess things up on our end as well.
-
Hi Andy,
Yes, the video content is owned by us, and there are two reasons why I want to remove the SERP listing for the third party:
1. Since their listing is for their domain (even though it's a specific subdomain dedicated to us), it's not obvious to the user that the video belongs to us (and we are a very well-known brand here in Sweden).
2. When the user clicks the third party's video listing, they land directly in the video player. There are no menus, logos, or anything else that tells the user this is our content, and there is no way for the user to click through to our website and make a purchase.
And yes, sometimes the SERP shows our domain and sometimes the third-party domain, and I don't actually know how to control this.
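One alternative worth raising with the third party (a sketch, not something confirmed anywhere in this thread): instead of a robots.txt Disallow, they could serve an `X-Robots-Tag: noindex` header on the player pages. Google can then still crawl the player (so a video sitemap that references it keeps working), but drops the third-party URLs from the results. Note that this only works if the pages are *not* blocked by robots.txt, since Google must be able to fetch them to see the header. Hypothetical nginx config on the player domain:

```
# Hypothetical nginx config on the third-party player domain:
# deindex the player pages without blocking crawling, so video
# sitemaps pointing at them from the customer's domain keep working.
location /players/ourbrand/ {
    add_header X-Robots-Tag "noindex" always;
}
```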
-
Just so I understand: are you asking the third party to block the video so that only your site appears when the video shows up in the search results?
- Does the video currently show up in the SERPs from your site?
- Is the video content in question owned by you?
At the end of the day, you can ask anyone anything you wish - you just don't know what the answers will be.
Andy