During the main conference, 28–30 April, we will have the following three invited speakers:
Simon King, University of Edinburgh, UK
Simon King is the Director of the Centre for Speech Technology Research at the University of Edinburgh. His research interests include automatic speech recognition and speech synthesis.
This talk will be about text-to-speech synthesis, a challenging area that combines text processing and acoustic modelling. The talk will start with a short tutorial on the state of the art, focusing on recent advances in acoustic modelling made by researchers from both fields at the intersection of speech synthesis and automatic speech recognition. He will then describe a variety of open problems that will only be solved by advances in text processing, and will try to convince the audience that they should also be working on these problems, at the intersection of natural language processing and speech synthesis.
Ulrike von Luxburg, University of Hamburg, Germany
Ulrike von Luxburg is a professor of computer science (machine learning) at the University of Hamburg, Germany. Her research focus is the theoretical analysis of machine learning algorithms, in particular for unsupervised learning and graph algorithms.
In the context of supervised machine learning, it is straightforward to empirically evaluate and compare the results of different algorithms, and supervised machine learning is well understood from a theoretical point of view. In the domain of unsupervised learning, the situation is far less clear. In my talk, I will first discuss problems and issues that arise in the empirical evaluation of unsupervised learning algorithms. Second, I will discuss the nature of theoretical guarantees for unsupervised learning.
Dan Roth, University of Illinois at Urbana-Champaign, USA
Dan Roth is a Professor in the Department of Computer Science and the Beckman Institute at the University of Illinois at Urbana-Champaign and a University of Illinois Scholar.
Roth is a Fellow of the ACM, AAAI and ACL, for his contributions to Machine Learning and to Natural Language Processing. He has published broadly in machine learning, natural language processing, knowledge representation and reasoning, and learning theory, and has developed advanced machine learning based tools for natural language applications that are being used widely by the research community and commercially.
I will describe some of our research in developing learning and inference methods in pursuit of natural language understanding. In particular, I will address what I view as some of the key challenges, including (i) learning models from natural interactions, without direct supervision, (ii) knowledge acquisition and the development of inference models capable of incorporating knowledge and reasoning, and (iii) scalability and adaptation: learning to accelerate inference during the lifetime of a learning system.
A lot of this work is done within the unified computational framework of Constrained Conditional Models (CCMs), an Integer Linear Programming formulation that augments statistically learned models with declarative constraints as a way to support learning and reasoning. Within this framework, I will discuss old and new results pertaining to learning and inference and how they are used to push forward our ability to understand natural language.
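To make the flavour of constrained inference concrete, here is a minimal, hypothetical sketch of the idea behind CCM-style decoding: a learned model supplies per-position label scores, and a declarative constraint restricts which joint assignments are admissible. Real CCMs solve this as an Integer Linear Program; the brute-force enumeration, the label set, and the scores below are illustrative assumptions, not the speaker's implementation.

```python
# Illustrative sketch of CCM-style constrained inference (not an ILP solver).
# A statistically learned model would supply the per-token scores; a
# declarative constraint rules out inadmissible joint assignments.
from itertools import product

LABELS = ["O", "ENTITY"]

# Hypothetical per-token label scores from a learned model.
scores = [
    {"O": 0.2, "ENTITY": 0.9},
    {"O": 0.6, "ENTITY": 0.5},
    {"O": 0.3, "ENTITY": 0.8},
]

def satisfies_constraints(assignment):
    # Declarative constraint: at most one ENTITY label in the output.
    return assignment.count("ENTITY") <= 1

def ccm_infer(scores):
    # Pick the highest-scoring assignment among those satisfying the constraint.
    best, best_score = None, float("-inf")
    for assignment in product(LABELS, repeat=len(scores)):
        if not satisfies_constraints(list(assignment)):
            continue
        total = sum(s[y] for s, y in zip(scores, assignment))
        if total > best_score:
            best, best_score = list(assignment), total
    return best

print(ccm_infer(scores))  # the unconstrained argmax would violate the constraint
```

The unconstrained argmax here would label both the first and third tokens ENTITY; the declarative constraint forces the decoder to keep only the more confident one, which is the kind of interplay between learned scores and stated constraints the framework exploits.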