A Computational Model of Non-Cooperation in Natural Language Dialogue
This PhD research project focuses on the analysis and modelling of non-cooperation in dialogue.
Conversation is usually understood as a collaborative task in which two or more participants work together to achieve a certain goal. Consequently, most theories of dialogue assume full cooperation between the dialogue participants. Concepts such as joint actions, shared plans and dialogue games all belong to this tradition, and a number of successful research dialogue systems have been implemented based on these models. They share the assumption that participants agree on what they want to achieve with the conversation and that their joint efforts go in that direction; CMU’s Let’s Go dialogue system is a typical example.
However, real-life conversations seldom go so smoothly. A great many situations escape the assumptions above, and about these existing models have little to say. This research aims to narrow that gap by looking carefully at the “odd” cases: those in which dialogue participants depart from the norm and act selfishly in pursuit of their individual goals.
The project has two main tracks: performing empirical investigations into the nature of non-cooperative conversational behaviour in naturally-occurring dialogue, and devising a computational model of dialogue management suitable for implementing conversational agents that can exhibit such behaviour.
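One way to see the contrast the project is concerned with is in terms of how an agent selects its next dialogue move. The following is a hypothetical sketch, not the project’s actual model: the move names and utility values are invented for illustration. A cooperative agent maximises joint utility across both participants, while a non-cooperative agent maximises its own utility alone, which can lead it to evasive moves.

```python
# Illustrative sketch only: hypothetical dialogue moves and made-up
# (speaker_utility, hearer_utility) pairs for each move.
MOVES = {
    "answer_directly": (1.0, 3.0),
    "evade": (2.0, 0.5),
    "change_topic": (2.5, 0.0),
}

def cooperative_choice(utilities):
    """Pick the move with the highest joint (summed) utility."""
    return max(utilities, key=lambda m: sum(utilities[m]))

def selfish_choice(utilities):
    """Pick the move with the highest utility for the speaker alone."""
    return max(utilities, key=lambda m: utilities[m][0])

print(cooperative_choice(MOVES))  # answer_directly
print(selfish_choice(MOVES))      # change_topic
```

Under these toy numbers the cooperative agent answers directly (joint utility 4.0), while the self-interested agent changes topic (own utility 2.5), mirroring the kind of divergence between joint and individual goals the model must capture.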
For the empirical analysis I use broadcast political interviews as a source of data. The particular nature of these exchanges, and the usually conflicting goals that interviewers and politicians bring to them, provide plenty of interesting situations to work with.