Invited Speakers
by James Allen (University of Rochester, USA)
Abstract. Deep language understanding involves mapping language to its intended meaning in context, using concepts and relations in an ontology that supports knowledge and reasoning. Currently, one would think there is consensus across the field of computational linguistics that deep understanding is not possible, and almost all current research in the field focuses on developing new machine learning techniques over large corpora. I will argue that while this research is producing significant results, and will continue to do so, it has also served to isolate the field from its original home within Artificial Intelligence. As a result, current natural language work is mostly divorced from work in reasoning, planning and acting. In this talk I will argue that, contrary to current thought, an effective level of deep understanding is a very viable research area. I will present examples of recent work to support these claims. Interestingly, while I argue that the current statistical paradigm is unlikely to achieve deep understanding, it is also the case that deep understanding will likely only be possible by exploiting the advances in statistical approaches.
This talk will be given through videoconferencing.
Short bio. James Allen is the John H. Dessauer Professor of Computer Science at the University of Rochester, as well as Associate Director of the Institute for Human and Machine Cognition in Pensacola, Florida. He received his PhD in Computer Science from the University of Toronto and was a recipient of the Presidential Young Investigator award from NSF in 1984. A Founding Fellow of the American Association for Artificial Intelligence (AAAI), he was editor-in-chief of the journal Computational Linguistics from 1983 to 1993. He has authored numerous research papers in the areas of natural language understanding, knowledge representation and reasoning, and spoken dialogue systems.
Learning about Activities and Objects from Video
by Anthony G. Cohn (University of Leeds, UK)
Abstract. In this talk I will present ongoing work at Leeds on building models of video activity. I will present techniques, both supervised and unsupervised, for learning the spatio-temporal structure of tasks and events from video or other sensor data. In both cases, the representation will exploit qualitative spatio-temporal relations. A novel method for robustly transforming video data to qualitative relations will be presented. For supervised learning I will show how the supervisory burden can be reduced using what we term "deictic supervision", whilst in the unsupervised case I will present a method for learning the most likely interpretation of the training data. I will also show how objects can be "functionally categorised" according to their spatio-temporal behaviour and how the use of type information can help in the learning process, especially in the presence of noise. I will present results from several domains including a kitchen scenario and an aircraft apron.
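To give a feel for the kind of qualitative abstraction the abstract refers to, here is a minimal sketch (not the speaker's system; the Box class, object tracks and relation names are invented for illustration) that maps tracked bounding boxes to coarse, RCC-style spatial relations frame by frame:

```python
# A rough sketch, not the system from the talk: map two tracked bounding
# boxes to a coarse qualitative spatial relation, frame by frame.
from dataclasses import dataclass

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float

def relation(a: Box, b: Box) -> str:
    """Coarse qualitative relation between two axis-aligned boxes."""
    if a.x2 < b.x1 or b.x2 < a.x1 or a.y2 < b.y1 or b.y2 < a.y1:
        return "disconnected"
    if a.x1 >= b.x1 and a.y1 >= b.y1 and a.x2 <= b.x2 and a.y2 <= b.y2:
        return "inside"
    if b.x1 >= a.x1 and b.y1 >= a.y1 and b.x2 <= a.x2 and b.y2 <= a.y2:
        return "contains"
    return "overlapping"

# Invented toy tracks: a hand approaching a cup over three frames.
hand = [Box(0, 0, 2, 2), Box(3, 3, 5, 5), Box(4, 4, 6, 6)]
cup = [Box(5, 5, 7, 7)] * 3

# The qualitative history abstracts away exact coordinates.
print([relation(h, c) for h, c in zip(hand, cup)])
# ['disconnected', 'overlapping', 'overlapping']
```

The point of such an abstraction is that relation histories of this kind, rather than raw coordinates, become the input to the supervised or unsupervised learning of activity structure described in the talk.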
Short bio. Tony Cohn holds a Personal Chair at the University of Leeds, where he is Professor of Automated Reasoning. He is presently Director of the Institute for Artificial Intelligence and Biological Systems. He leads a research group working on Knowledge Representation and Reasoning with a particular focus on qualitative spatial and spatio-temporal reasoning, the best-known result being the widely cited Region Connection Calculus (RCC). His current research interests range from theoretical work on spatial calculi and spatial ontologies, to cognitive vision, modelling spatial information in the hippocampus, and detecting buried underground assets (e.g. utilities and archaeological residues) using a variety of geo-located sensors. He has been Chairman/President of SSAISB, ECCAI, KR Inc., and the IJCAI Board of Trustees, and is presently Editor-in-Chief of the AAAI Press, Spatial Cognition and Computation, and the Artificial Intelligence journal. He was elected a founding Fellow of ECCAI, and is also a Fellow of AAAI, AISB, the BCS, and the IET.
AND/OR Search and Sampling over Probabilistic and Deterministic Graphical Models
by Rina Dechter (University of California, Irvine, USA)
Abstract. I will describe three classes of algorithms which have become central in Artificial Intelligence and general computer science. These algorithms involve search over graphical models such as constraint networks, Bayesian networks, Markov fields and influence diagrams, which are the most commonly used representational schemes in tasks such as design, scheduling, diagnosis, planning and decision making, with applications in medical diagnosis, man-machine interfaces, electronic commerce, robot planning, troubleshooting, economic prediction, market analysis, natural language understanding and bio-informatics. All these tasks are known to be computationally hard, and while substantial progress over the last two to three decades has pushed the computational boundaries far ahead, numerous real-life problems are still out of reach for current technology. Advances in exact and approximate methods are therefore crucial, with potential impact across many computational disciplines. In this talk I will describe progress made in the last decade on developing search algorithms that adapt to the problem's structure by recognizing and exploiting problem decomposability, equivalence and irrelevance, and through which we can 1) efficiently and flexibly trade space for time, 2) efficiently and flexibly trade time for accuracy, gradually transitioning into approximations, 3) exploit deterministic relationships, and 4) trade online for offline computation. Specifically, I will present exact AND/OR search algorithms and approximate sampling schemes over mixed probabilistic and deterministic graphical models, and briefly describe the results of evaluating some of these algorithms in recent UAI solver evaluation events (UAI 2006, UAI 2008, UAI 2010).
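As a toy illustration of the decomposability being exploited (an invented two-factor model, not the algorithms or benchmarks from the talk), the sketch below computes the same partition function twice: once by plain enumeration over all joint assignments, and once by the AND/OR idea of conditioning on a variable and solving the resulting independent subproblems separately:

```python
# An invented toy example of the AND/OR decomposition idea, not the
# algorithms from the talk: conditioning on variable A splits the model
# into independent subproblems over B and over C.
from itertools import product

# Two pairwise factors over binary variables A, B, C.
factors = {
    ("A", "B"): {(a, b): 1.0 + a + b for a, b in product((0, 1), repeat=2)},
    ("A", "C"): {(a, c): 2.0 if a == c else 0.5 for a, c in product((0, 1), repeat=2)},
}
domains = {v: (0, 1) for v in "ABC"}

def z(used_factors, assignment, remaining):
    """Sum, over all completions of `assignment`, of the product of `used_factors`."""
    if not remaining:
        value = 1.0
        for u, v in used_factors:
            value *= factors[(u, v)][(assignment[u], assignment[v])]
        return value
    var, rest = remaining[0], remaining[1:]
    return sum(z(used_factors, {**assignment, var: val}, rest)
               for val in domains[var])

# Plain OR search: enumerate all 2^3 joint assignments.
z_or = z(list(factors), {}, ["A", "B", "C"])

# AND/OR search: once A is fixed, the two factors share no unassigned
# variable, so each subproblem is summed separately over its own variable
# and the results are multiplied.
z_and_or = sum(z([("A", "B")], {"A": a}, ["B"]) * z([("A", "C")], {"A": a}, ["C"])
               for a in domains["A"])

assert abs(z_or - z_and_or) < 1e-9  # both equal 20.0 for these factors
```

On models with many variables, this kind of decomposition, guided by the problem's graph structure, is what turns an exponential joint enumeration into a product of much smaller independent computations.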
Short bio. Rina Dechter is a professor of Computer Science at the University of California, Irvine. She received her PhD in Computer Science from UCLA in 1985, an MS degree in Applied Mathematics from the Weizmann Institute, and a B.S. in Mathematics and Statistics from the Hebrew University, Jerusalem. Her research centers on computational aspects of automated reasoning and knowledge representation, including search, constraint processing and probabilistic reasoning. Professor Dechter is the author of Constraint Processing, published by Morgan Kaufmann in 2003, has authored over 100 research papers, and has served on the editorial boards of Artificial Intelligence, the Constraints journal, the Journal of Artificial Intelligence Research, and Logical Methods in Computer Science (LMCS). She was awarded the Presidential Young Investigator award in 1991, has been a Fellow of the American Association for Artificial Intelligence since 1994, held a Radcliffe Fellowship in 2005-2006, and received the 2007 Association for Constraint Programming (ACP) Research Excellence Award.
The Model-based Approach to Autonomous Behavior: Prospects and Challenges
by Hector Geffner (Universitat Pompeu Fabra, Barcelona, Spain)
Abstract. One of the central problems faced by autonomous agents is the selection of the action to do next. In AI, three approaches have been used to address this problem: the programming-based approach, where the agent controller is hardwired; the learning-based approach, where the controller is learned from experience; and the model-based approach, where the controller is derived from a model. Planning in AI is best conceived as the model-based approach to the action selection problem. The models represent the initial situation, the actions, the sensors, and the goals. The main challenge in planning is computational, as all the models, whether accommodating feedback and uncertainty or not, are intractable. Thus planners must automatically recognize and exploit the structure of the given problems in order to scale up. In this talk, I will review the models considered in current planning research, the progress achieved in solving these models, and the ideas that have turned out to be most useful. I will also discuss some of the problems that remain open, and the use of planners for behavior recognition as opposed to behavior generation.
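As a deliberately tiny illustration of what such a model looks like, the sketch below encodes a classical planning problem (an initial state, STRIPS-style actions with preconditions, add and delete lists, and a goal) and solves it by plain breadth-first search; the pick-move-drop domain is invented for illustration and is not from the talk:

```python
# An invented toy model, not from the talk: initial state, STRIPS-style
# actions (precondition, add list, delete list) and a goal, solved by
# plain breadth-first search over states.
from collections import deque

initial = frozenset({"robot_in_A", "ball_in_A", "hand_empty"})
goal = {"ball_in_B"}

actions = {
    "pick_ball": ({"robot_in_A", "ball_in_A", "hand_empty"},
                  {"holding_ball"},
                  {"ball_in_A", "hand_empty"}),
    "move_A_B": ({"robot_in_A"}, {"robot_in_B"}, {"robot_in_A"}),
    "drop_ball": ({"robot_in_B", "holding_ball"},
                  {"ball_in_B", "hand_empty"},
                  {"holding_ball"}),
}

def plan(initial, goal, actions):
    """Return a shortest list of action names reaching the goal, or None."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, (pre, add, delete) in actions.items():
            if pre <= state:
                successor = frozenset((state - delete) | add)
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, path + [name]))
    return None

print(plan(initial, goal, actions))
# ['pick_ball', 'move_A_B', 'drop_ball']
```

Scaling beyond toy instances requires replacing this blind search with techniques that exploit the structure of the model, which is the computational challenge the abstract refers to.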
Short bio. Hector Geffner received his Ph.D. from UCLA in 1989 with a thesis that won the 1990 ACM Dissertation Award. He then worked as a Staff Research Member at the IBM T.J. Watson Research Center in New York, USA, and at the Universidad Simon Bolivar in Caracas, Venezuela. He is currently a researcher at ICREA and a professor at the Universitat Pompeu Fabra. He is a Fellow of AAAI and ECCAI, serves as an Associate Editor of the Journal of Artificial Intelligence Research and as an Editorial Board member of Artificial Intelligence, and is the author of the book "Default Reasoning: Causal and Conditional Theories", published by MIT Press in 1992. In the last ten years Hector has worked in the area of planning, developing techniques for deriving agent behaviors effectively and automatically from various types of models. His work in AI, which combines mathematics and experiments, is motivated by his interest in understanding the power and flexibility of human cognition.
Ontology-based data management: principles and challenges
by Maurizio Lenzerini (Università La Sapienza, Roma, Italy)
Abstract. Ontology-based data management aims at accessing and using data by means of a conceptual representation of the domain of interest in the underlying information system. Although this new paradigm provides several interesting features, many of which have already proved effective in managing complex information systems, several important issues still constitute stimulating research challenges for the Knowledge Representation and Reasoning community. In this talk I will provide an introduction to ontology-based data management, illustrating the main ideas and techniques for using an ontology to access the data layer of an information system. I will then discuss important issues that are still the subject of extensive investigation, including the need for inconsistency-tolerant query answering methods and the need to support update operations expressed over the ontology.
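To make the basic idea concrete, here is a minimal sketch (the concepts, subclass axioms and data are invented and do not come from the talk) in which a query over an ontology concept is answered by first expanding it with the ontology's axioms and then evaluating it over the data layer:

```python
# An invented toy example, not from the talk: answer a query over an
# ontology concept by expanding it with subclass axioms before
# evaluating it against the data layer.

# Ontology: each concept maps to its direct subclasses.
subclass_of = {
    "Person": {"Student", "Professor"},
    "Student": {"PhDStudent"},
}

# Data layer: membership assertions (individual, concept).
assertions = [
    ("ann", "Professor"),
    ("bob", "PhDStudent"),
    ("carla", "Student"),
]

def expand(concept):
    """All concepts whose instances are, by the ontology, also instances of `concept`."""
    result = {concept}
    for sub in subclass_of.get(concept, ()):
        result |= expand(sub)
    return result

def answer(concept):
    """Certain answers to the query 'return all instances of `concept`'."""
    relevant = expand(concept)
    return sorted(ind for ind, c in assertions if c in relevant)

# No individual is asserted to be a Person directly, yet all three are answers.
print(answer("Person"))   # ['ann', 'bob', 'carla']
```

Actual ontology-based data management systems use far richer ontology and mapping languages, but they follow the same pattern of bringing the ontology's axioms to bear during query answering.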
Short bio. Maurizio Lenzerini is a professor in Computer Science and Engineering at the Università di Roma La Sapienza, where he currently leads a research group on Artificial Intelligence and Databases. His main research interests are in Knowledge Representation and Reasoning, ontology languages, semantic data integration, and service modeling. His recent work in AI is mainly oriented towards the use of Knowledge Representation and Automated Reasoning principles and techniques in information system management, in particular in information integration and service composition. He has authored over 250 papers and has served on the editorial boards of several international journals, including the Journal of Artificial Intelligence Research, Information Systems, IEEE Transactions on Knowledge and Data Engineering, and Logical Methods in Computer Science. He is currently the Chair of the PODS (Principles of Database Systems) Executive Committee, and is a Fellow of both ECCAI and ACM.