17 December 2009
Automatic Multimodal Behavior Generation in Second Life
Meeting Room 10, 2nd Floor, JLB
12:30pm - 13:45pm
Werner Breitfuss - University of Tokyo
1.) My own research on gesture and gaze generation from text and its applications using Second Life. This will include the methods for language analysis, from simple part-of-speech and syntax tagging to splitting utterances into theme/rheme and action/object parts; how to produce the corresponding behavior from that information using CBD, shallow parsing, dictionaries, and a novel algorithm for pointing gestures; and finally how to animate and control the characters using MPML and Second Life.
2.) The second part will cover other research done in our laboratory, including:
· Our "Global Lab" project, in which we developed the infrastructure for advanced communication and participatory science based on the 3D Internet.
· OpenAstroSim, an OpenSim-based application for synchronous collaborative visualization of astrophysical phenomena.
· EML3D, to our knowledge the world's first environment manipulation language for Second Life. EML3D is powerful, extensible, and flexible, and can be used easily by non-programming experts.
· Emotion recognition from text, a novel syntactical rule-based approach to affect recognition. The key feature of the developed Affect Analysis Model is that it is designed to handle not only grammatically and syntactically correct textual input, but also informal messages written in an abbreviated or expressive manner.
Other projects will be covered if time permits.
I would like to keep the seminar as open as possible and encourage questions and discussions.
Werner Breitfuss is a Ph.D. student in the Ishizuka Laboratory at the University of Tokyo, in the Department of Creative Informatics. He also works as a Research Assistant at the National Institute of Informatics together with Assoc. Professor Helmut Prendinger. His research covers areas such as virtual worlds, embodied virtual characters, behavior generation from text, natural language processing, and multimodal dialogues.
He currently focuses on:
· Gaze and Gesture Generation from Text
· Real-time Behavior Generation for Avatars in Second Life
· Multimodal Referring Expressions for Embodied Virtual Agents
Werner Breitfuss has published papers in international journals and conferences and will finish his Ph.D. next March.