
Preference inference through rescaling preference learning

  • Nic Wilson
  • Mojtaba Montazery

Research output: Contribution to journal › Article › peer-review

Abstract

One approach to preference learning, based on linear support vector machines, involves choosing a weight vector whose associated hyperplane has maximum margin with respect to an input set of preference vectors, and using this to compare feature vectors. However, as is well known, the result can be sensitive to how each feature is scaled, so that rescaling can lead to an essentially different vector. Considering all possible rescalings gives rise to a set of possible weight vectors, which we call the rescale-optimal ones. From this set one can define a more cautious preference relation, in which one vector is preferred to another if it is preferred for all rescale-optimal weight vectors. In this paper, we analyse which vectors are rescale-optimal, and when there is a unique rescale-optimal vector, and we consider how to compute the induced preference relation.
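The rescaling sensitivity described above can be seen in a minimal numerical sketch (not the paper's code), assuming the standard hard-margin formulation: minimise ||w||² subject to w · p ≥ 1 for every input preference vector p = x − y (meaning x is preferred to y). The preference vectors, the scale factors, and the helper `max_margin_w` are all illustrative choices:

```python
# Sketch of rescaling sensitivity in max-margin preference learning.
# Assumption: hard-margin formulation min ||w||^2 s.t. w . p >= 1 for all p.
import numpy as np
from scipy.optimize import minimize

def max_margin_w(prefs):
    """Normalised max-margin weight vector for a set of preference vectors."""
    cons = {"type": "ineq", "fun": lambda w: prefs @ w - 1.0}
    res = minimize(lambda w: w @ w, x0=np.ones(prefs.shape[1]),
                   constraints=[cons])
    return res.x / np.linalg.norm(res.x)

# Two nearly parallel preference vectors in a 2-feature space.
prefs = np.array([[1.0, 0.5],
                  [0.9, 0.6]])

w = max_margin_w(prefs)

# Rescale the second feature by 3, learn again, and map the weights back
# to the original units (since w' . (x * scale) == (w' * scale) . x).
scale = np.array([1.0, 3.0])
w_back = max_margin_w(prefs * scale) * scale
w_back /= np.linalg.norm(w_back)

# The two weight directions differ, and for the difference vector d below
# the comparison flips sign: x with x - y = d is preferred to y under the
# original scaling but not under the rescaled one.
d = np.array([1.0, -1.0])
print(w, w_back, w @ d, w_back @ d)
```

Under the cautious relation sketched in the abstract, neither vector would be preferred for this pair, since two rescale-optimal weight vectors disagree on the sign of w · d.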

Original language: English
Pages (from-to): 2203-2209
Number of pages: 7
Journal: IJCAI International Joint Conference on Artificial Intelligence
Volume: 2016-January
Publication status: Published - 2016
Event: 25th International Joint Conference on Artificial Intelligence, IJCAI 2016 - New York, United States
Duration: 9 Jul 2016 - 15 Jul 2016
