AC-D3: Video Bubble Charts

The three developers behind ac-d3 have published a neat plugin for D3.js that integrates audiovisual content (AC) into data charts (D3). The videos are pulled from YouTube or Vimeo to illustrate the chart content. Presenting and playing multiple videos at once may not make much sense, but it is nice to see moving images inside a bubble chart. One could use it as a tool for remixing different videos into a collage, or for more serious data analysis tasks involving videos; I am still waiting to see a more meaningful example. Unfortunately, the browser struggles under the load of simultaneously playing videos. Without these performance problems, AC-D3 would be a candidate for the Video Learning Dashboard that I am currently developing.

The AC-D3 code is available on GitHub.
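I have not dug into the plugin's internals, so the snippet below is not the AC-D3 API but a rough sketch of the underlying idea: let D3 compute a circle packing and place an HTML5 video inside each bubble via an SVG foreignObject. All data, file names and sizes are invented, and the D3 v3 pack layout is assumed.

    // Rough sketch of the idea behind video bubble charts -- not the AC-D3 API.
    // Data, video files and sizes are invented; D3 v3 is assumed.
    var clips = [{ name: "clip A", value: 40, src: "clipA.mp4" },
                 { name: "clip B", value: 25, src: "clipB.mp4" }];

    var svg = d3.select("body").append("svg")
        .attr("width", 400)
        .attr("height", 400);

    var pack = d3.layout.pack()
        .size([400, 400])
        .value(function (d) { return d.value; });

    // pack.nodes() adds x, y and radius r to every node; keep only the leaves.
    var leaves = pack.nodes({ children: clips }).filter(function (d) {
      return !d.children;
    });

    var bubble = svg.selectAll("g").data(leaves).enter().append("g")
        .attr("transform", function (d) {
          return "translate(" + (d.x - d.r) + "," + (d.y - d.r) + ")";
        });

    // Embed an HTML5 video into each SVG bubble via foreignObject and round it
    // off with a border radius. Muted autoplay avoids an audio cacophony, but
    // many simultaneously decoding videos remain a performance problem.
    bubble.append("foreignObject")
        .attr("width",  function (d) { return 2 * d.r; })
        .attr("height", function (d) { return 2 * d.r; })
      .append("xhtml:video")
        .attr("src", function (d) { return d.src; })
        .attr("autoplay", true)
        .property("muted", true)
        .style("width", "100%")
        .style("border-radius", "50%");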

HTML5 Video Zoom

I was wondering how to zoom an HTML5 video, since zooming is an almost ubiquitous feature in web shops. Zoom makes particular sense for videos, because visual details may otherwise stay hidden in the background. High-resolution (HD) videos in particular are often displayed at a reduced size, so that even ordinary objects like text become barely readable, especially on small devices such as mobile phones.

The Mediasite video player has a similar feature, but it is realized by a simple CSS transition on a video that is displayed smaller than its actual size. One could say it is a fake zoom.
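A minimal sketch of that approach, with an invented element id and zoom factor:

    // "Fake zoom": the video is rendered smaller than its native resolution
    // and simply scaled up with a CSS transform -- no extra detail appears.
    var video = document.getElementById("lecture-video");
    video.style.width = "480px";                    // displayed below native size
    video.style.transition = "transform 0.3s ease";
    video.style.transformOrigin = "center center";

    function fakeZoom(factor) {
      video.style.transform = "scale(" + factor + ")";
    }

    fakeZoom(2);   // e.g. zoom to 200 %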

Much better results can be achieved with the small panzoom library. With a few lines of code, zoom can be applied to the HTML video element.
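A minimal sketch, assuming the panzoom package published on npm by anvaka (the post may refer to a different library of the same name); the element id and option values are invented:

    // Pan and zoom directly on the <video> element. A surrounding container
    // with overflow:hidden keeps the zoomed video inside its original box.
    var panzoom = require("panzoom");

    var video    = document.getElementById("lecture-video");
    var instance = panzoom(video, {
      minZoom: 1,   // never smaller than the original size
      maxZoom: 5    // enough to make small text in HD recordings readable
    });

    // Later, when the player is destroyed:
    // instance.dispose();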

In any case, you can take a look at a demo that I have just separated from Vi-Two. It is all work in progress, but the source code has been available on GitHub for a few months now.

Screenshot of the video zoom feature of the vi-two framework. Besides zoom, the playback speed is adjustable to the user's needs.
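The speed control, by the way, is plain HTML5: the playbackRate property of the media element. A tiny sketch with an invented element id:

    // Adjust the playback speed of an HTML5 video.
    var video = document.getElementById("lecture-video");

    function setSpeed(rate) {
      video.playbackRate = rate;   // 1.0 = normal, 0.5 = half, 2.0 = double speed
    }

    setSpeed(1.5);   // e.g. watch a slow lecture 50 % faster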

Final submission of 9 interaction design patterns for video learning environments

Yesterday I finished ten months of work on two papers for the European Conference on Pattern Languages of Programs (EuroPLoP) 2014, held at Irsee Monastery in Bavaria. When submitting to a typical conference, the number of externally triggered revisions of a paper is two at most: once the reviewers provide their feedback, you incorporate it into the paper, and only rarely does the editor have to remind you to follow the author guidelines.
At EuroPLoP, in contrast, you are guided by a shepherd who gives you feedback over four to five iterations before the paper is submitted to the conference. That means hard work on the text.
It gets even better when you have the chance to hear others discussing your paper in one of the writers' workshops at the conference. Any defence of the author's point of view or background considerations is prevented, so that you learn which message other readers actually take away from your written work.

Perhaps more interesting is the content of the two papers I have submitted.

Interaction design patterns for interactive video players in video-based learning environments

This paper is about interaction design patterns that describe common solutions to recurring problems in the design and development of video-based learning environments. The patterns are organized in two layers. The first layer covers micro interactivity in the video player itself: any manipulation that affects the presentation within the video or intervenes in its playback belongs to the micro level of interactivity. Currently, 17 patterns have been identified for that layer; five of them are the subject of this article: Annotated Timeline, Classified Marks, Playback Speed, User Traces, and Visual Summary.
The second layer of the pattern language consists of 12 patterns that describe interactivity on a macro level. Macro interactivity comprises all manipulations concerning one or more videos as a whole; it does not include playback itself, but the organisation and structure of the video learning environment.
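To picture what micro-level interactivity means in code, here is a hypothetical fragment that is not taken from the papers (ids and times are invented): the player reacts to the playback position by showing an overlay annotation during a given time span, roughly in the spirit of the Annotated Timeline pattern.

    // Hypothetical illustration of micro-level interactivity: an overlay
    // annotation is shown while the playback position is between 42 s and 50 s.
    var video = document.getElementById("lecture-video");
    var note  = document.getElementById("annotation-overlay");

    video.addEventListener("timeupdate", function () {
      var t = video.currentTime;
      note.style.display = (t >= 42 && t <= 50) ? "block" : "none";
    });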

Interaction design patterns for design and development of video learning environments


Unvideopedia: Why Videos Don't Catch On at Wikipedia

Once again, someone is trying to get more videos into Wikipedia. Of the 4 million articles, only about one in a thousand is enriched with a video. The advantages of dynamic media are beyond question, and there is no longer a shortage of (Creative Commons) video resources. Nevertheless, there are several difficulties with the (co-)production of videos for Wikipedia articles:

  • Language: many videos are narrated in a different language than the article, so subtitles are unavoidable.
  • Visual literacy: comparatively few people can make good films, whereas many people can write.
  • Opportunity: many phenomena cannot simply and immediately be captured with a camera from one's desk (e.g. kingfishers, Kuelap, the growth of porcini mushrooms).
  • Quality and correction: while poor writing style or imperfect content can easily be edited in a text, videos turn out to be atomic blocks that, if anything, can only be re-produced or replaced in their entirety. Whether narrated HTML5 slideshows or vector animations (Litracy) will prevail in the future remains to be seen.

In the project Wiki Makes Video, Andrew Lih is now trying to support the integration of videos and to simplify it with design aids (patterns). The site offers some practical tips on camera work and editing, depending on what kind of object you want to film. Also noteworthy is the somewhat naive list of articles that could benefit from a video; in fact, every article dealing with animals, plants, buildings, landscapes, or any other three-dimensional, real-world object could be enriched with moving images.

In the past there has been at least one similar initiative, by Kaltura, which unfortunately petered out. This time, however, Lih appears to have signed up some enthusiastic students from the University of Southern California, so at least some effect should be noticeable. At this year's WikiSym/OpenSym in Hong Kong, Andrew Lih will present a contribution entitled "Video Co-creation in Collaborative Online Communities".

I am also curious to see when the list of featured videos [1,2] will be disarmed:

  1. Annie Oakley shooting glass balls, 1894.ogg
  2. Apache-killing-Iraq.avi.ogg
  3. Cub polar bear is nursing 2.OGG
  4. DuckandC1951.ogg
  5. Eichmann trial news story.ogg
  6. Goa 1955 invasion.ogg
  7. Moon transit of sun large.ogg
  8. Play fight of polar bears edit 1.avi.OGG
  9. Searching for bodies, Galveston 1900.ogg
  10. Tanks of WWI.ogg