Content Reference Library

Last year, Viacom filed a $1bn lawsuit against YouTube/Google. Viacom's claim is that YouTube illegally distributed Viacom-owned content to its users. The problem is that nobody actually knows who owns what content. Even Viacom doesn't know which content it owns.

Here is an example of what I mean. Viacom demanded that an independent film maker, Joanna Davidovich, remove from YouTube a film she had made, believing the film belonged to Viacom. In fact it did not; they had contacted her in error. A Viacom executive later got in touch with her to explain what had happened.

Joanna wraps it up as follows:

I was personally contacted by an executive at Viacom, who explained how my film got mixed into their system. Juxtaposer was in a film festival that was presented by Nicktoons, which is of course a Viacom company. They offered selections of the festival as downloadable content, and Juxtaposer was one of them. They just forgot that Viacom’s rights to those films were all nonexclusive. He personally assured me that Viacom is no longer making a claim to my film and YouTube should be sending me documents affirming that shortly. I don’t think this would have been over with nearly as fast if not for the publicity I got from your post. This could have been a nightmare, but it wasn’t. Count this one a success!

In effect, nobody knows with 100% certainty who owns what content. Not even Viacom. Now, Mike Arrington says the copyright law should be changed. I disagree. What should change is not the law as such, but how the law is implemented in practice. What we need is a reference library with referencing technology attached to it, so that every distributor of electronic content can automatically check the content they are distributing against it. A distributor that does not do so, and ends up distributing copyrighted material illegally, could be deemed not to have complied with the law.

For example, you could transcribe the audio in videos to written text and create an index of that text. You could subsequently query that index with text you discover on your video/music/text/etc sharing site. Content owners could publicly claim their content in that reference library (in certain geographies and for certain distribution channels).
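To make the idea concrete, here is a minimal sketch of such a reference check: transcripts are broken into word n-gram "shingles" and indexed, so an upload can be matched against the registry of claimed works. A real system would fingerprint the audio and video themselves; the `ReferenceLibrary` class, the work id and the transcripts below are all invented for illustration.

```python
# Sketch of a transcript-based reference library: rights holders register
# ("claim") transcripts, distributors check uploads against the index.

def shingles(text, n=3):
    """Return the set of lowercase word n-grams in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

class ReferenceLibrary:
    def __init__(self):
        self.index = {}  # shingle -> set of work ids that contain it

    def claim(self, work_id, transcript):
        """A rights holder registers a transcript for a work."""
        for s in shingles(transcript):
            self.index.setdefault(s, set()).add(work_id)

    def check(self, transcript, threshold=0.5):
        """Return work ids whose registered shingles cover at least
        `threshold` of the uploaded transcript's shingles."""
        probe = shingles(transcript)
        if not probe:
            return []
        hits = {}
        for s in probe:
            for work in self.index.get(s, ()):
                hits[work] = hits.get(work, 0) + 1
        return [w for w, c in hits.items() if c / len(probe) >= threshold]

library = ReferenceLibrary()
library.claim("juxtaposer", "a short animated film about two rival painters")
matches = library.check(
    "short animated film about two rival painters and friends")
```

Because matching is by overlap ratio rather than exact equality, a slightly edited or re-encoded upload would still be flagged, which is the point of an automated reference check.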

Having such a reference library would actually indirectly increase the value of the overall ecosystem, as it would help all creators of content to claim (and offer for distribution and license) their content.

I am not sure whether any content company would want that reference library. Probably too much work. And it would only indirectly generate revenues. It is so much easier to sue Google than to try to get the job done right in the first place.

Maybe Umair should have a word with them. Maybe they can be made to understand that this would actually increase the value of their business. Sounds like quite a long shot.

PS: See the little (R) sign next to the Viacom logo? Bad bad me for including it in this post!



Is Facebook the Database of Un-intentions?

John Battelle once called Google the ‘Database of Intentions’. What he meant by that was that Google tracks the search terms people type into its search engine. It captures what they are looking for on the web. Their intentions, in other words, hence the ‘Database of Intentions’. It is precisely for this reason that Google works well for advertisers. Searchers have the intent of searching for something. They express their search by keyword, and advertisers can bid for those keywords. Thus, advertisers get highly specific traffic directed to their websites at a fair price.

Let’s have a look at Facebook by comparison. As reported by TechCrunch, FlowingData ran an interesting article a few days ago showing the sorts of applications available to Facebook users. Applications are built by third-party developers, whom Facebook allows to operate on its platform. In a sense, the types of these applications give a very interesting insight into the intent of Facebook users. Most interestingly, the vast majority of applications are classified as ‘just for fun’, followed by gaming. Those familiar with Facebook will understand what ‘just for fun’ means. These are all the vampire kisses, hugs, pokes and so forth. I can assure you that when you get ‘bitten by a vampire’, there is no serious intent involved.

Today, I read an article by Bob Gilbreath, a marketing executive who reported on his experience of using Facebook as an advertising platform. His conclusion is damning. His results for both CPM (cost per thousand impressions) and CPC (cost per click) are below industry average, both for targeted groups within Facebook and for Facebook as a whole. You can read the full, well-written analysis on his blog. To sum it up: in his experience, advertising on Facebook performed worse than on any normal website. Facebook is less effective than the industry average. This impression seems to be shared by others he refers to, including Chris Anderson, Fred Wilson and Nick Denton.
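For readers unfamiliar with the two metrics, here is how they are computed; the spend, impression and click figures below are made up purely to illustrate the arithmetic, not taken from Bob's campaign.

```python
# CPM: cost per thousand impressions; CPC: cost per click.
# All campaign numbers below are invented for illustration.

def cpm(spend, impressions):
    """Dollars paid per thousand times the ad is shown."""
    return spend / impressions * 1000

def cpc(spend, clicks):
    """Dollars paid per click on the ad."""
    return spend / clicks

spend = 50.0           # dollars spent on the campaign
impressions = 200_000  # times the ad was shown
clicks = 40            # times the ad was clicked

campaign_cpm = cpm(spend, impressions)  # 0.25
campaign_cpc = cpc(spend, clicks)       # 1.25
ctr = clicks / impressions              # click-through rate: 0.0002, i.e. 0.02%
```

The click-through rate ties the two together: with a low CTR, even a cheap CPM translates into an expensive CPC, which is one way 'unintentioned' traffic shows up in an advertiser's numbers.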

Wow. Worse than average? How is that possible? Isn’t the theory that social networks are supposed to be highly specific and effective in terms of the kind of traffic that they can send to advertisers?

I still believe that to be true. So maybe this was a Facebook-specific problem? Thinking about this, it suddenly hit me. Facebook is the place where people go without any specific intent in mind. This is shown clearly by the kind of ‘just for fun’ applications on Facebook. Facebook users simply go there to fool around, ‘just for fun’. Thinking back to what John Battelle called Google’s ‘Database of Intentions’, maybe Facebook is the opposite of Google. Maybe Facebook is the Database of Un-intentions.

If this rationale holds water, then Facebook’s traffic must be the opposite kind of traffic to Google’s. Given that Google’s traffic is the most valuable on the web, this would make Facebook’s traffic the least valuable. That could explain why advertisers seem to get such a low return on their money at Facebook.