Re: [videoblogging] videoblogging paper

Adrian Miles wrote:

> He's working with the people I originally worked with in Bergen,
> where we developed two prototype SMIL systems. One used Potemkin's
> Odessa Steps sequence. Every shot was given metadata about scale,
> direction, movement and so on. You could then search for, e.g.,
> close-ups with left-to-right movement. The engine found all matching
> shots and could then play them for you in sequence, regardless of
> where they occurred in the original.
>
> The second used the entire length of John Ford's The Searchers. I
> added metadata about doors in the film: if a shot contained a door,
> the metadata recorded whether the camera was inside or outside, and
> whether it was looking inside or outside. You could then search for,
> say, doors inside looking inside. Every shot that met the criteria
> was pulled out and formed a comic-strip panel. Selecting any of
> those showed you the shot in context (and we were going to add
> rollback features so you could nominate how much surrounding context
> you wanted; the default was, from memory, 15 seconds). The system
> could also have strung them all together, but that wasn't the point
> of it. My proposal at the time was to use this to build a
> documentary engine.
>
> Good to see that Jon's continuing the work, this needs to be done.
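
Just to make sure I'm reading you right, here is roughly how I
picture the first engine working. This is only a hypothetical Python
sketch: the scale/direction/movement fields come from your
description, while the sample catalogue and helper names are
invented.

    from dataclasses import dataclass

    @dataclass
    class Shot:
        start: float    # seconds into the source film
        end: float
        scale: str      # e.g. "close-up", "medium", "long"
        direction: str  # e.g. "left-to-right", "right-to-left"
        movement: str   # e.g. "pan", "tilt", "static"

    # Tiny invented catalogue standing in for the annotated shots.
    catalogue = [
        Shot(0.0, 4.2, "close-up", "left-to-right", "pan"),
        Shot(4.2, 9.8, "long", "right-to-left", "static"),
        Shot(9.8, 12.1, "close-up", "left-to-right", "static"),
    ]

    def find_shots(shots, **criteria):
        """Return every shot whose metadata matches all criteria."""
        return [s for s in shots
                if all(getattr(s, field) == value
                       for field, value in criteria.items())]

    # "Close-ups with left-to-right movement", played in source order
    # regardless of where they occur in the original.
    for shot in sorted(find_shots(catalogue, scale="close-up",
                                  direction="left-to-right"),
                       key=lambda s: s.start):
        print(f"play {shot.start:.1f}s-{shot.end:.1f}s")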
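
The door system with its context rollback seems to follow the same
pattern; again a hypothetical sketch, with only the inside/outside
fields and the 15-second default taken from your description:

    from dataclasses import dataclass

    @dataclass
    class DoorShot:
        start: float  # seconds into the film
        end: float
        camera: str   # camera position: "inside" or "outside"
        looking: str  # where it looks: "inside" or "outside"

    def panels(shots, camera, looking):
        """Shots meeting the door criteria, one per comic-strip panel."""
        return [s for s in shots
                if s.camera == camera and s.looking == looking]

    def in_context(shot, rollback=15.0):
        """Clip window padded with surrounding context; the 15 s
        default matches the one you recall, clamped at time zero."""
        return (max(0.0, shot.start - rollback), shot.end + rollback)

    shots = [DoorShot(62.0, 66.5, "inside", "outside"),
             DoorShot(301.4, 305.0, "inside", "inside")]

    # "Doors inside looking inside", then view a panel in context.
    strip = panels(shots, camera="inside", looking="inside")
    print(in_context(strip[0]))  # -> (286.4, 320.0)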

Are you familiar with Mediastreams[1]? It was a system developed in
the early 90s that supported the kinds of things you mention above.
For my Master's project I am considering re-imagining it for a
web-centric context and exploring how it might serve as a foundation
for things like video blogging. If you have any references for the
systems you mentioned, I'd like to take a look at them.

[1] http://acg.media.mit.edu/people/golan/mediastreams/