
10 September 2007

Multimodal Content Creation and Affective Interaction with Life-Like Characters

Location: Room M308, Maths and Computing
Time: 12:30 - 13:45
Speaker(s): Dr. Helmut Prendinger

In this talk, I will provide an overview of our recent research on the interaction of users with virtual interface agents and on the generation of multimodal content employing life-like agents. First, I will describe our efforts in designing agents that adapt their behavior to the user's affective state and visual attention, so-called "attentive" (or perceptive) agents. Here, we process and interpret bio-signals to recognize emotion, and eye movements to estimate (visual) interest. The same physiological measures were also used to evaluate the effectiveness of the animated agents. Second, I will briefly introduce the Multimodal Presentation Markup Language 3D (MPML3D), which allows non-expert content authors to easily create attractive, highly dynamic, and interactive multimodal content. In the third part of the talk, I will turn to our research on analyzing text for the purposes of computer-mediated interaction and content creation. Specifically, we aim at "sensing" emotion (and sentiment) from text to enhance, e.g., instant messaging, and to automatically suggest emotional behavior for the character agents. Our latest interest is in automatically converting text (monologue) into a multimodal dialogue between two agents. I will argue that this "text to dialogue" transformation is a promising approach to professional content creation for non-experts.
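As a rough illustration of the "sensing emotion from text" idea, the sketch below scores an utterance against a tiny hand-made affect lexicon and maps the dominant emotion to a gesture suggestion for a character agent. This is a minimal sketch of the general technique, not the speaker's actual system; the lexicon, emotion labels, gesture names, and function names are all hypothetical.

```python
# Minimal sketch: lexicon-based affect sensing for agent behavior
# suggestion. Lexicon entries, labels, and gesture names are all
# hypothetical illustrations, not the system described in the talk.
from collections import Counter

# Tiny hand-made affect lexicon (hypothetical).
EMOTION_LEXICON = {
    "happy": "joy", "great": "joy", "love": "joy",
    "sad": "sadness", "sorry": "sadness", "miss": "sadness",
    "angry": "anger", "hate": "anger", "annoying": "anger",
}

# Hypothetical mapping from a sensed emotion to a nonverbal behavior.
GESTURE_FOR_EMOTION = {
    "joy": "smile_and_nod",
    "sadness": "lower_gaze",
    "anger": "frown",
}

def sense_emotion(text: str) -> str | None:
    """Return the most frequent lexicon emotion in the text, if any."""
    tokens = text.lower().split()
    hits = Counter(EMOTION_LEXICON[t] for t in tokens if t in EMOTION_LEXICON)
    return hits.most_common(1)[0][0] if hits else None

def suggest_behavior(text: str) -> str:
    """Suggest an agent gesture for an instant-messaging utterance."""
    emotion = sense_emotion(text)
    # Fall back to an idle animation when no emotion is sensed.
    return GESTURE_FOR_EMOTION.get(emotion, "neutral_idle")

if __name__ == "__main__":
    print(suggest_behavior("I love this, it looks great!"))  # smile_and_nod
    print(suggest_behavior("The meeting was moved."))        # neutral_idle
```

A real system would of course go well beyond keyword matching, e.g., by handling negation and intensity and by combining textual cues with the physiological signals mentioned above.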



Contact: Digital Content and Media Sciences Research Division, National Institute of Informatics (NII), Tokyo