TY - JOUR
T1 - Scaling-invariant maximum margin preference learning
AU - Montazery, Mojtaba
AU - Wilson, Nic
N1 - Publisher Copyright:
© 2020 The Authors
PY - 2021/1
Y1 - 2021/1
N2 - One natural way to express preferences over items is to represent them in the form of pairwise comparisons, from which a model is learned in order to predict further preferences. In this setting, if an item a is preferred to an item b, then it is natural to consider that the preference still holds after multiplying both vectors by a positive scalar (e.g., 2a≻2b). Such invariance to scaling is satisfied in maximum margin learning approaches for pairs of test vectors, but not for the preference input pairs, i.e., scaling the inputs in a different way could result in a different preference relation being learned. In addition to the scaling of preference inputs, maximum margin methods are also sensitive to the method used for normalizing (scaling) the features, an essential pre-processing step for these methods. In this paper, we define and analyse more cautious preference relations that are invariant to the scaling of features, or of preference inputs, or both simultaneously; this leads to computational methods for testing dominance with respect to the induced relations, and for generating optimal solutions (i.e., best items) among a set of alternatives. In our experiments, we compare the relations and their associated optimality sets based on their decisiveness, computation time and cardinality of the optimal set.
AB - One natural way to express preferences over items is to represent them in the form of pairwise comparisons, from which a model is learned in order to predict further preferences. In this setting, if an item a is preferred to an item b, then it is natural to consider that the preference still holds after multiplying both vectors by a positive scalar (e.g., 2a≻2b). Such invariance to scaling is satisfied in maximum margin learning approaches for pairs of test vectors, but not for the preference input pairs, i.e., scaling the inputs in a different way could result in a different preference relation being learned. In addition to the scaling of preference inputs, maximum margin methods are also sensitive to the method used for normalizing (scaling) the features, an essential pre-processing step for these methods. In this paper, we define and analyse more cautious preference relations that are invariant to the scaling of features, or of preference inputs, or both simultaneously; this leads to computational methods for testing dominance with respect to the induced relations, and for generating optimal solutions (i.e., best items) among a set of alternatives. In our experiments, we compare the relations and their associated optimality sets based on their decisiveness, computation time and cardinality of the optimal set.
KW - Preference inference
KW - Preference learning
UR - https://www.scopus.com/pages/publications/85094325234
U2 - 10.1016/j.ijar.2020.10.006
DO - 10.1016/j.ijar.2020.10.006
M3 - Article
AN - SCOPUS:85094325234
SN - 0888-613X
VL - 128
SP - 69
EP - 101
JO - International Journal of Approximate Reasoning
JF - International Journal of Approximate Reasoning
ER -