On 06/07/2004, at 2:30 AM, Jay Dedman wrote:
> http://infodesign.no/diablog/index.php?p=190&more=1&c=1 (just text)
> Everyone may have already seen this, but Jon Hoem wrote up an article
> where he focuses on doing collective documentaries. It gets into a lot
> of the deeplinking issues we've been discussing here.
Good to see. He's working with the people I originally worked with in
Bergen, where we developed two prototype SMIL systems. One used
Potemkin's Odessa Steps sequence: every shot was given metadata about
scale, direction, movement, and so on. You could then search for, say,
close-ups with left-to-right movement. The engine found all matching
shots and could then play them for you in sequence, regardless of where
they occurred in the original.
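To make the idea concrete, here's a minimal sketch (hypothetical data, field names, and values; nothing here is from the actual Bergen system) of that kind of metadata query: shots carry scale and movement tags, and a search returns every match in original-sequence order for playback.

```python
# Hypothetical sketch of metadata-tagged shot retrieval.
from dataclasses import dataclass

@dataclass
class Shot:
    number: int     # position in the original sequence
    scale: str      # e.g. "close-up", "medium", "wide"
    movement: str   # e.g. "left-to-right", "right-to-left", "static"

shots = [
    Shot(1, "wide", "static"),
    Shot(2, "close-up", "left-to-right"),
    Shot(3, "close-up", "right-to-left"),
    Shot(4, "close-up", "left-to-right"),
]

def find(shots, **criteria):
    """Return shots matching every given metadata field, in original order."""
    return [s for s in shots
            if all(getattr(s, k) == v for k, v in criteria.items())]

# e.g. close-ups with left-to-right movement, played back in sequence
playlist = find(shots, scale="close-up", movement="left-to-right")
print([s.number for s in playlist])  # -> [2, 4]
```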
The second used the entire length of John Ford's The Searchers. I added
metadata about doors in the film: if there was a door, was the camera
inside or outside, and was it looking inside or outside? You could then
search for, say, doors shot inside looking inside. Every shot that met
the criteria was pulled out and formed a comic-strip panel, and
selecting any panel showed you the shot in context. (We were going to
add rollback features so you could nominate how much surrounding
context you wanted; the default was, from memory, 15 seconds.) The
system could have strung the shots all together, but that wasn't the
point. My proposal at the time was to use this to build a documentary
engine.
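A sketch of that second system, again with invented data and names (the real prototype was SMIL-based; this just illustrates the query-plus-rollback idea): door shots carry camera/looking tags, and a selected match is padded with a configurable amount of surrounding context.

```python
# Hypothetical sketch of the doors query with context rollback.
from dataclasses import dataclass

@dataclass
class DoorShot:
    start: float   # shot start time in seconds
    end: float     # shot end time in seconds
    camera: str    # "inside" or "outside"
    looking: str   # "inside" or "outside"

shots = [
    DoorShot(120.0, 126.0, "inside", "inside"),
    DoorShot(300.0, 304.0, "inside", "outside"),
    DoorShot(450.0, 455.0, "inside", "inside"),
]

def query(shots, camera, looking):
    """All door shots matching the camera/looking criteria."""
    return [s for s in shots if s.camera == camera and s.looking == looking]

def clip_with_context(shot, rollback=15.0):
    """Clip boundaries padded with surrounding context on either side."""
    return (max(0.0, shot.start - rollback), shot.end + rollback)

matches = query(shots, camera="inside", looking="inside")
print([clip_with_context(s) for s in matches])
# -> [(105.0, 141.0), (435.0, 470.0)]
```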
Good to see that Jon's continuing the work; this needs to be done.
hypertext.rmit || hypertext.rmit.edu.au/adrian
interactive networked video || hypertext.rmit.edu.au/vog
research blog || hypertext.rmit.edu.au/vog/vlog/