24 April 2014
Interactive Machine Learning for End-User Systems Building in Music Composition & Performance
Meeting Room 10, 2nd Floor, JLB
14:15 - 15:45
I build, study, teach about, and perform with new human-computer interfaces for real-time digital music performance. Much of my research concerns the use of supervised learning as a tool for musicians, artists, and composers to build digital musical instruments and other real-time interactive systems. Through the use of training data, these algorithms offer composers and instrument builders a means to specify the relationship between low-level, human-generated control signals (such as the outputs of gesturally-manipulated sensor interfaces, or audio captured by a microphone) and the desired computer response (such as a change in the parameters driving computer-generated audio). The task of creating an interactive system can therefore be formulated not as a task of writing and debugging code, but rather one of designing and revising a set of training examples that implicitly encode a target function, and of choosing and tuning an algorithm to learn that function.
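The workflow described above can be sketched in a few lines of code. This is a hedged illustration only: the class and method names are hypothetical, not Wekinator's actual interface, and a simple linear least-squares fit stands in for the standard supervised learning algorithms mentioned.

```python
import numpy as np

class GestureMapper:
    """Illustrative sketch of the training-by-demonstration workflow:
    a mapping from low-level control signals (e.g. sensor readings) to
    synthesis parameters is specified by example rather than by code.
    Names and structure are hypothetical, not Wekinator's API."""

    def __init__(self):
        self.inputs = []   # training inputs: sensor feature vectors
        self.outputs = []  # training targets: desired synth parameters

    def add_example(self, sensor_vector, synth_params):
        # The performer demonstrates: "when the sensor reads this,
        # the sound should be configured like this."
        self.inputs.append(sensor_vector)
        self.outputs.append(synth_params)

    def train(self):
        # Fit a linear least-squares map (with a bias term) from
        # sensor features to synthesis parameters.
        X = np.hstack([np.asarray(self.inputs),
                       np.ones((len(self.inputs), 1))])
        Y = np.asarray(self.outputs)
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def map(self, sensor_vector):
        # Real-time use: turn a new sensor reading into parameters.
        x = np.append(np.asarray(sensor_vector), 1.0)
        return x @ self.W

mapper = GestureMapper()
# Two demonstrated examples: sensor position -> (pitch Hz, volume)
mapper.add_example([0.0, 0.0], [220.0, 0.2])
mapper.add_example([1.0, 1.0], [440.0, 0.8])
mapper.train()
params = mapper.map([0.5, 0.5])  # interpolates the demonstrations
```

Designing the instrument then amounts to adding, removing, or revising examples and retraining, rather than editing mapping code by hand.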
In this talk, I will provide a brief introduction to interactive computer music and the use of supervised learning in this field. I will show a live musical demo of the software that I have created to enable non-computer-scientists to interactively apply standard supervised learning algorithms to music and other real-time problem domains. This software, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training data by real-time demonstration and the evaluation of trained models through hands-on application to real-time inputs.
Drawing on my work with users applying the Wekinator to real-world problems, I'll discuss how data-driven methods can enable more effective approaches to building interactive systems, through supporting rapid prototyping and an embodied approach to design, and through "training" users to become better machine learning practitioners. I'll also discuss some of the remaining challenges at the intersection of machine learning and human-computer interaction that must be addressed for end users to apply machine learning more efficiently and effectively, especially in interactive contexts.
Rebecca Fiebrink is a Lecturer in Graphics and Interaction at Goldsmiths, University of London. As both a computer scientist and a musician, she is interested in creating and studying new technologies for music composition and performance. Much of her current work focuses on applications of machine learning to music: for example, how can machine learning algorithms help people to create new digital musical instruments by supporting rapid prototyping and a more embodied approach to design? How can these algorithms support composers in creating real-time, interactive performances in which computers listen to or observe human performers, then respond in musically appropriate ways? She is interested both in how techniques from computer science can support new forms of music-making, and in how applications in music and other creative domains demand new computational techniques and bring new perspectives to how technology might be used and by whom.
Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and she frequently collaborates with composers and artists on digital media projects. She has worked extensively as a co-director, performer, and composer with the Princeton Laptop Orchestra, which performed at Carnegie Hall and has been featured in the New York Times, the Philadelphia Inquirer, and NPR's All Things Considered. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app "I Am T-Pain." Recently, Rebecca has enjoyed performing as the principal flutist in the Timmins Symphony Orchestra, as the keyboardist in the University of Washington computer science rock band "The Parody Bits," and as a laptopist in the Princeton-based digital music ensemble, Sideband. She holds a PhD in Computer Science from Princeton University and a Master's in Music Technology from McGill University.