EWG on Preference Handling

Advances in Preference Handling

Multidisciplinary Working Group affiliated to EURO

Proposal for a Research Program: Studying Machine Learning Methods from a Decision-Theoretic Perspective (Draft 2018-01-23)

Research on preference handling is based on the assumption that preference models and preference representations are important for human-centered computing, i.e., for computational systems that assist humans in making decisions as well as for systems that make decisions like human agents in order to fulfill some task. Humans are rarely indifferent about decision outcomes, so this assumption concerns every decision humans care about, ranging from shopping recommendations to financial, medical, contractual, and other decisions with far-reaching consequences. Such systems need to provide results that are readily accepted by humans and adapted to their needs and desiderata.

However, the currently most successful approach to achieving such high adaptability consists of numerical machine learning methods such as deep learning and other representation learning methods, which do not assume any particular model of the behavior to be learned but instead rely on huge amounts of training data. Since 2014, these methods have received unprecedented attention from the general public and are considered a new panacea for building systems that achieve human-like behavior and human-like decision making on basic cognitive tasks. These tasks include recognizing objects in images, speech recognition and generation, language translation, question answering, and game playing. In principle, deep neural networks can be applied to any classification problem, and they are starting to be used to make human-like decisions for the tasks listed at the beginning.

This success of deep learning methods in society challenges our assumption about the importance of preference representations for human-centered computing. It is thus legitimate to adopt a critical stance and to investigate whether representation learning can fulfill this promise in a way that is acceptable to the field of preference handling, even if our assumptions are not adopted by those methods. We would therefore like to propose a research program that pursues such a critical investigation. Its purpose is to understand the decision-making capabilities of deep neural networks and of deep learning methods.

It should be noted that a system may exhibit adaptive behavior that respects certain preferences even if it does not have an explicit representation of preferences. Methods for revealing and eliciting preferences can be applied to black-box classifiers produced by deep learning methods in the same way as they are applied to human decision makers. Hence, the theoretical tools for studying the decision-making behavior of deep learning systems already exist.
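As an illustration, the following minimal sketch (in Python) applies pairwise preference elicitation to a black-box decision maker. The black box here is a stand-in with a hidden linear utility; in the research program it would be a trained deep network. All names (black_box_choose, HIDDEN_WEIGHTS, etc.) are hypothetical and serve only to illustrate the idea.

    # Minimal sketch: revealing the preferences of a black-box decision maker
    # by querying it on pairs of alternatives and checking a rationality axiom.
    from itertools import combinations, permutations
    import random

    random.seed(0)

    # Hypothetical black box: chooses between two alternatives (feature vectors).
    # Its internals (here a hidden linear utility) are assumed to be opaque.
    HIDDEN_WEIGHTS = [0.6, -0.2, 0.9]

    def black_box_choose(a, b):
        """Return the alternative the system selects from the pair (a, b)."""
        score = lambda x: sum(w * v for w, v in zip(HIDDEN_WEIGHTS, x))
        return a if score(a) >= score(b) else b

    # Elicit a revealed preference relation by querying all pairs.
    alternatives = [tuple(random.uniform(-1, 1) for _ in range(3))
                    for _ in range(5)]
    prefers = {}  # prefers[(a, b)] is True iff the system chose a over b
    for a, b in combinations(alternatives, 2):
        chosen = black_box_choose(a, b)
        prefers[(a, b)] = chosen == a
        prefers[(b, a)] = chosen == b

    def is_transitive(alts, prefers):
        """Check transitivity, a basic rationality axiom, on the relation."""
        for a, b, c in permutations(alts, 3):
            if prefers[(a, b)] and prefers[(b, c)] and not prefers[(a, c)]:
                return False
        return True

    print("revealed relation is transitive:", is_transitive(alternatives, prefers))

Because the stand-in maximizes a linear utility, its revealed relation is transitive; a genuine deep network might violate such rationality axioms, which is exactly the kind of property this elicitation would detect.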

However, the fact that the behavior of a system respects a preference model does not imply that the system has an explicit representation of preferences. Additional methods are needed to reveal such an explicit representation. For example, it may be assumed that a system with an explicit representation of preferences can change its preferences with less effort than a system without one. Based on this assumption, the existence of explicit representations can be tested with suitably designed experiments.
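To make this assumption concrete, the following hypothetical sketch contrasts two systems that implement the same decision rule: one whose utility weights form an explicit, directly editable representation, and one that can only be revised through small gradient-like updates. The experiment counts the updates each system needs before it respects a revised preference. All classes, names, and numbers are illustrative assumptions, not an existing method.

    # Hypothetical experiment: measure the "effort" (number of parameter
    # updates) needed to make a system respect a revised preference a over b.

    class ExplicitSystem:
        """Decides via an explicit, directly editable utility function."""
        def __init__(self, weights):
            self.weights = list(weights)

        def utility(self, x):
            return sum(w * v for w, v in zip(self.weights, x))

        def prefers(self, a, b):
            return self.utility(a) > self.utility(b)

        def revise(self, a, b):
            """Edit the representation directly: few targeted updates suffice."""
            steps = 0
            while not self.prefers(a, b):
                self.weights = [w + (va - vb)
                                for w, va, vb in zip(self.weights, a, b)]
                steps += 1
            return steps

    class OpaqueSystem(ExplicitSystem):
        """Same decision rule, but revisable only by small gradient-like steps."""
        def revise(self, a, b, lr=0.01):
            steps = 0
            while not self.prefers(a, b):
                self.weights = [w + lr * (va - vb)
                                for w, va, vb in zip(self.weights, a, b)]
                steps += 1
            return steps

    a, b = (1.0, 0.0), (0.0, 1.0)  # revised preference: a should now beat b
    for system in (ExplicitSystem([-1.0, 1.0]), OpaqueSystem([-1.0, 1.0])):
        print(type(system).__name__, "revision steps:", system.revise(a, b))

Under these toy assumptions, the explicitly represented system complies after a couple of targeted edits, while the opaque one needs on the order of a hundred small steps. An actual experiment would replace the toy scorers with real deep networks, but the measured quantity, revision effort, would be the same.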

This research program thus proposes a methodology for analyzing, comparing, and improving the decision-making behavior of deep learning systems by using decision revision and preference revision as tests. On the one hand, this will give insights into the decision quality of those systems. On the other hand, it will give the preference handling community the opportunity to explore whether, when, and why explicit representations of preferences are necessary.