If non-paying customers only get a 2 min snippet of a video, can my video length in sitemap.xml be the full length?
-
I am working on a website whose primary content is video. It has an assortment of free videos, but the majority are viewable only with a subscription to the site. Without a subscription, you can watch a 2-minute clip of each video, but the full videos run anywhere from 10 minutes to 1.5 hours. When I auto-generate the sitemap.xml, can I put the full length of the members-only videos in the video:duration property?
Or, because only 2 minutes are publicly available (unless you pay for a membership), is that frowned upon?
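For context, the entry in question would look roughly like this (hypothetical URLs, using the standard video sitemap extension namespace):

```xml
<url>
  <loc>https://example.com/videos/sample</loc>
  <video:video>
    <video:thumbnail_loc>https://example.com/thumbs/sample.jpg</video:thumbnail_loc>
    <video:title>Sample video</video:title>
    <video:description>Two-minute free preview of a members-only video.</video:description>
    <!-- This is the value in question: 120 (the free snippet) vs. the full runtime -->
    <video:duration>120</video:duration>
  </video:video>
</url>
```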
-
Based on the two facts below, I would put the snippet length for your videos in your sitemap:
-
Google can crawl the videos themselves and may detect that the actual length doesn't match what you declare.
-
Google has a published policy for newspapers that have paywalls and don't allow free full access (http://googlenewsblog.blogspot.com/2009/12/update-to-first-click-free.html). In short: "We will crawl, index and treat as "free" any preview pages - generally the headline and first few paragraphs of a story - that they make available to us. This means that our crawlers see the exact same content that will be shown for free to a user. Because the preview page is identical for both users and the crawlers, it's not cloaking." The same principle applies here: describe what the public (and the crawler) can actually see, i.e. the 2-minute snippet.
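Since you're auto-generating the sitemap anyway, here is a minimal sketch of emitting an entry with the snippet duration, using Python's standard library. The URLs and titles are placeholders, not your real site:

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
VIDEO_NS = "http://www.google.com/schemas/sitemap-video/1.1"

# Register prefixes so the output uses the conventional sitemap namespaces.
ET.register_namespace("", SITEMAP_NS)
ET.register_namespace("video", VIDEO_NS)

def video_url_entry(page_url, title, description, thumbnail, snippet_seconds):
    """Build one <url> entry; duration reflects the publicly viewable snippet."""
    url = ET.Element(f"{{{SITEMAP_NS}}}url")
    ET.SubElement(url, f"{{{SITEMAP_NS}}}loc").text = page_url
    video = ET.SubElement(url, f"{{{VIDEO_NS}}}video")
    ET.SubElement(video, f"{{{VIDEO_NS}}}thumbnail_loc").text = thumbnail
    ET.SubElement(video, f"{{{VIDEO_NS}}}title").text = title
    ET.SubElement(video, f"{{{VIDEO_NS}}}description").text = description
    # 2-minute free preview, even though the members-only video is longer.
    ET.SubElement(video, f"{{{VIDEO_NS}}}duration").text = str(snippet_seconds)
    return url

urlset = ET.Element(f"{{{SITEMAP_NS}}}urlset")
urlset.append(video_url_entry(
    "https://example.com/videos/intro",
    "Intro lesson",
    "Free two-minute preview of the full lesson.",
    "https://example.com/thumbs/intro.jpg",
    120,
))
print(ET.tostring(urlset, encoding="unicode"))
```

The key design choice is that `snippet_seconds` is always the preview length (120 here), so the crawler's view and the sitemap's claim stay identical.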