Workshop on Preference Handling for Artificial Intelligence

Vancouver, British Columbia, July 22, held in conjunction with AAAI 2007

Description

Preferences guide human decision making from early childhood (e.g. "which ice cream flavor do you prefer?") to complex professional and organizational decisions (e.g. "which investment funds should we choose?"). Preferences are essential for making intelligent choices in complex situations, for mastering large sets of alternatives, and for coordinating a multitude of decisions. Explicit preference models allow an agent to reason about its own behavior and that of other agents, and to analyze and revise this behavior. For these reasons, preference models have become indispensable in many fields of Artificial Intelligence, such as multi-agent systems, combinatorial auctions, diagnosis, design, configuration, planning, and default reasoning, among many others. In addition, preference modeling and aggregation are central to decision theory, social choice, and game theory, three subfields of economics that increasingly cross-fertilize with AI.

AI tasks often need forms of preference handling that go beyond classic utility-based models. Recent work on preference handling in AI has consequently developed many new preference representation formalisms, as reflected in the publications at previous preference-handling workshops at AI conferences. Examples include logical preference representations, graphical models, and generalized forms of utility functions. AI research has also produced new methods for reasoning about preferences and new problem-solving algorithms based on preferences.
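To make the idea of an explicit graphical preference model concrete, the following minimal Python sketch encodes a small CP-net-style conditional preference model and ranks the outcomes it induces. The variables, domains, and preference orderings are hypothetical illustrations, and the lexicographic ranking used here is only a convenient linearization that happens to agree with the ceteris-paribus semantics for this tiny acyclic example; it is not the general CP-net dominance test.

    from itertools import product

    # Hypothetical decision variables and their domains.
    domains = {
        "dinner": ["fish", "meat"],
        "wine": ["white", "red"],
    }

    # Conditional preference tables (CP-net style): the wine preference
    # depends on the dinner choice; the dinner preference is unconditional.
    # Each entry maps a parent assignment to a ranking, best value first.
    parents = {"dinner": (), "wine": ("dinner",)}
    cpt = {
        "dinner": {(): ["fish", "meat"]},
        "wine": {("fish",): ["white", "red"],
                 ("meat",): ["red", "white"]},
    }
    topological_order = ["dinner", "wine"]  # parents before children

    def lex_key(outcome):
        """Rank of each variable's value under its CP table, in topological order.

        Sorting by this key yields a total order consistent with the induced
        ceteris-paribus preferences for this small acyclic example (a
        linearization of the preference graph, not a dominance test).
        """
        key = []
        for var in topological_order:
            parent_values = tuple(outcome[p] for p in parents[var])
            key.append(cpt[var][parent_values].index(outcome[var]))
        return tuple(key)

    # Enumerate all outcomes and list them from most to least preferred.
    outcomes = [dict(zip(domains, values)) for values in product(*domains.values())]
    for o in sorted(outcomes, key=lex_key):
        print(o)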

This workshop not only continues these innovations, but also brings the results back to AI problems and explores the promise of preferences for AI challenges. It seeks to broaden the scope of preference handling in AI and to attract researchers from all subfields of AI to discuss potential and existing applications of explicit preference models.