Lecture: Preference Elicitation and Preference Learning in Social Choice
Social choice has been the subject of intense investigation within computer science, AI, and operations research, in part because of the ease with which preference data from user populations can now be elicited, assessed, or estimated in online settings. In many domains, the preferences of a group of individuals must be aggregated to form a single consensus recommendation, placing us squarely in the realm of social choice.
I argue that the application of social choice models and voting schemes to domains like web search, product recommendation and social networks places new emphasis, in the design of preference aggregation schemes, on issues such as: articulating decision criteria suitable to the application at hand; approximate winner determination; incremental preference elicitation; learning methods for models of population preferences; and more nuanced analysis of the potential for manipulation.
In this talk, I'll provide an overview of some of these challenges and outline some of our recent work tackling them, including methods for: learning probabilistic models of population preferences from choice data; robust optimization (winner determination) in the presence of incomplete user preferences; and incremental vote/preference elicitation for group decision making. Each of these poses interesting modeling and optimization challenges that are best tackled using a combination of techniques from AI, operations research, and statistics.
Boutilier has published over 170 refereed articles covering topics ranging from knowledge representation, belief revision, default reasoning, and philosophical logic, to probabilistic reasoning, decision making under uncertainty, multiagent systems, and machine learning. His current research efforts focus on various aspects of decision making under uncertainty: preference elicitation, mechanism design, game theory and multiagent decision processes, economic models, social choice, computational advertising, Markov decision processes and reinforcement learning.
Boutilier served as Program Chair for both the 16th Conference on Uncertainty in Artificial Intelligence (UAI-2000) and the 21st International Joint Conference on Artificial Intelligence (IJCAI-09). He is Associate Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR). Boutilier is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI). He has been awarded the Izaak Walton Killam Research Fellowship and an IBM Faculty Award. He also received the Killam Teaching Award from the University of British Columbia in 1997.
Lecture: Propagation in Constraints: How One Thing Leads To Another
At a conference such as CPAIOR, we have experts from many different approaches to searching huge combinatorial spaces. Much of what we all do is common: similar search methods, heuristics, and learning techniques, for example. So what is it that is essentially different about Constraint Programming in particular? One answer is the power and diversity of constraint propagation algorithms. By contrast, other search disciplines often rely on just one propagation technique, such as unit propagation in SAT.
So to paraphrase the property expert, for the duration of this talk I'll say that the three most important aspects of Constraint Programming are "Propagation, Propagation, Propagation." I'll talk about all three. First, what Propagation is and the theory of how propagation algorithms work. Second, how Propagation forms the beating heart of a modern constraint solver, typically throwing away a lot of the theory in the process. And third, some aspects of Propagation in constraints being researched right now.
I hope to bring out some of the beauty and surprises in propagation. I also hope to show how ideas from one discipline have propagated to another, for example the introduction of watched literals from SAT into Constraint Programming. I also hope one propagation will lead to another as some of these lovely ideas propagate again.
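To make the idea of "one thing leading to another" concrete, here is a minimal sketch of the classic AC-3 arc-consistency algorithm, the textbook example of constraint propagation. It is an illustrative sketch, not any particular solver's implementation; the variables, domains, and constraint encoding in the example are assumptions chosen for brevity. Note how each pruning of a variable's domain re-enqueues the arcs pointing into that variable, so one propagation triggers another.

```python
from collections import deque

def revise(domains, constraint, x, y):
    """Remove values of x that have no supporting value in y's domain."""
    removed = False
    for vx in list(domains[x]):
        if not any(constraint(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, arcs):
    """Enforce arc consistency.

    arcs maps a directed pair (x, y) to a binary predicate on (value_x, value_y).
    Revisions repeat until a fixpoint is reached; returns False on a domain wipeout.
    """
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, arcs[(x, y)], x, y):
            if not domains[x]:
                return False  # empty domain: no solution
            # x's domain shrank, so re-examine every arc pointing into x
            queue.extend((u, v) for (u, v) in arcs if v == x and u != y)
    return True

# Toy example: X < Y < Z, all domains initially {1, 2, 3}
domains = {v: {1, 2, 3} for v in "XYZ"}
lt = lambda a, b: a < b
gt = lambda a, b: a > b
arcs = {("X", "Y"): lt, ("Y", "X"): gt, ("Y", "Z"): lt, ("Z", "Y"): gt}
ok = ac3(domains, arcs)
# Propagation alone collapses the domains to X={1}, Y={2}, Z={3}
```

A real solver layers many refinements on top of this fixpoint loop (specialized propagators per constraint, priorities, watched literals), but the shape above is the theory the talk starts from.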
Lecture: On Bilevel Programming and its Impact in Branching, Cutting and Complexity
Bilevel programming is a rich paradigm for expressing a variety of real-world applications, including game-theoretic and pricing ones. In this talk, however, we are interested in discussing the bilevel nature of two of the most crucial ingredients of enumerative methods for solving combinatorial optimization problems, namely branching and cutting.
Specifically, we discuss a new branching method for 0-1 programs called interdiction branching [Lodi, Ralphs, Rossi, Smriglio 2011] that exploits the intrinsic bilevel nature of the problem of selecting a branching disjunction. The method is designed to overcome the difficulties encountered in solving problems for which branching on variables is inherently weak. Unlike traditional methods, selection of the disjunction in interdiction branching takes into account the best feasible solution found so far.
On the cutting plane side, we examine the nature of the so-called separation problem, which is that of generating a valid inequality violated by a given real vector, usually arising as the solution to a relaxation of the original problem. We show that the problem of generating a maximally violated valid inequality often has a natural interpretation as a bilevel program [Lodi, Ralphs, Woeginger 2011]. In some cases, this bilevel program can be easily reformulated as a single-level mathematical program, yielding a standard mathematical programming formulation for the separation problem. In other cases, no such reformulation exists, yielding surprisingly interesting examples of problems arising in the complexity hierarchies introduced by [Jeroslow 1985].