Abstract
One approach to preference learning, based on linear support vector machines, involves choosing a weight vector whose associated hyperplane has maximum margin with respect to an input set of preference vectors, and using this to compare feature vectors. However, as is well known, the result can be sensitive to how each feature is scaled, so that rescaling can lead to an essentially different vector. Considering all possible rescalings gives rise to a set of possible weight vectors, which we call the rescale-optimal ones. From this set one can define a more cautious preference relation, in which one vector is preferred to another if it is preferred for all rescale-optimal weight vectors. In this paper, we analyse which vectors are rescale-optimal, and when there is a unique rescale-optimal vector, and we consider how to compute the induced preference relation.
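The cautious relation described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm for characterising the rescale-optimal set; it simply assumes some finite set of candidate weight vectors is already in hand (the names `prefers`, `cautiously_prefers`, and the example vectors are hypothetical) and shows how the induced preference relation is a partial order: one vector dominates another only if it does so under every candidate weight vector.

```python
import numpy as np

def prefers(w, x, y):
    """x is preferred to y under weight vector w iff w . (x - y) > 0."""
    return float(np.dot(w, x - y)) > 0.0

def cautiously_prefers(ws, x, y):
    """Cautious relation: x is preferred to y iff it is preferred
    under EVERY candidate weight vector in ws (standing in for the
    rescale-optimal set)."""
    return all(prefers(w, x, y) for w in ws)

# Hypothetical candidate weight vectors, e.g. max-margin solutions
# obtained under two different feature rescalings.
ws = [np.array([2.0, 1.0]), np.array([1.0, 2.0])]

x = np.array([3.0, 3.0])
y = np.array([1.0, 1.0])
z = np.array([1.0, 5.0])

print(cautiously_prefers(ws, x, y))  # True: x beats y under both vectors
print(cautiously_prefers(ws, x, z))  # False: z wins under w = [1, 2]
print(cautiously_prefers(ws, z, x))  # False: x wins under w = [2, 1]
```

Note that `x` and `z` end up incomparable: neither is cautiously preferred to the other, which is exactly the extra caution the rescale-optimal set buys compared with committing to a single learned weight vector.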
| Original language | English |
|---|---|
| Pages (from-to) | 2203-2209 |
| Number of pages | 7 |
| Journal | IJCAI International Joint Conference on Artificial Intelligence |
| Volume | 2016-January |
| Publication status | Published - 2016 |
| Event | 25th International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, United States. Duration: 9 Jul 2016 → 15 Jul 2016 |