
Modelling human preferences for ranking and collaborative filtering: a probabilistic ordered partition approach

Tran, Truyen, Phung, Dinh and Venkatesh, Svetha 2016, Modelling human preferences for ranking and collaborative filtering: a probabilistic ordered partition approach, Knowledge and information systems, vol. 47, no. 1, pp. 157-188, doi: 10.1007/s10115-015-0840-9.

Title Modelling human preferences for ranking and collaborative filtering: a probabilistic ordered partition approach
Author(s) Tran, Truyen (ORCID: orcid.org/0000-0001-6531-8907)
Phung, Dinh (ORCID: orcid.org/0000-0002-9977-8247)
Venkatesh, Svetha (ORCID: orcid.org/0000-0001-8675-6631)
Journal name Knowledge and information systems
Volume number 47
Issue number 1
Start page 157
End page 188
Total pages 32
Publisher Springer
Place of publication New York, N.Y.
Publication date 2016-04
ISSN 0219-1377 (print)
0219-3116 (electronic)
Keyword(s) preference learning
learning-to-rank
collaborative filtering
probabilistic ordered partition model
set-based ranking
probabilistic reasoning
Summary Learning preference models from human-generated data is an important task in modern information processing systems. A popular setting consists of simple input ratings, assigned numerical values to indicate their relevance with respect to a specific query. Since ratings are often specified within a small range, several objects may receive the same rating, creating ties among objects for a given query. Dealing with this phenomenon presents the general problem of modelling query-specific preferences in the presence of ties. To this end, we present in this paper a novel approach that constructs probabilistic models directly on the collection of objects, exploiting the combinatorial structure induced by the ties among them. The proposed probabilistic setting allows exploration of a super-exponential combinatorial state-space with an unknown number of partitions and an unknown order among them. Learning and inference in such a large state-space are challenging, yet we present efficient algorithms to perform these tasks. Our approach exploits discrete choice theory, imposing a generative process in which the finite set of objects is partitioned into subsets in a stagewise procedure, thus significantly reducing the state-space at each stage. Efficient Markov chain Monte Carlo algorithms are then presented for the proposed models. We demonstrate that the model can potentially be trained in a large-scale setting of hundreds of thousands of objects using an ordinary computer. In fact, in some special cases with appropriate model specification, our models can be learned in linear time. We evaluate the models on two application areas: (i) document ranking with data from the Yahoo! challenge and (ii) collaborative filtering with movie data. We demonstrate that the models are competitive against the state of the art.
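To make the stagewise generative process described above concrete, the following minimal Python sketch samples an ordered partition of a small object set: at each stage a non-empty subset of the remaining objects is drawn as the next tier with probability proportional to a set-worth function. The set_worth form (sum of exponentiated per-object scores), the function names, and the brute-force enumeration of candidate subsets are illustrative assumptions only, not the paper's formulation; the paper's contribution is precisely the efficient inference machinery that avoids such enumeration.

    import itertools
    import math
    import random

    def set_worth(subset, scores):
        # Assumed set-worth: sum of exponentiated per-object scores.
        # The paper derives its own form from discrete choice theory;
        # this particular choice is only an illustrative stand-in.
        return sum(math.exp(scores[o]) for o in subset)

    def sample_ordered_partition(objects, scores, rng=random.Random(0)):
        # Stagewise sketch: repeatedly pick the next tier (a non-empty
        # subset of the remaining objects) with probability proportional
        # to its worth, until no objects remain.
        remaining = list(objects)
        partition = []
        while remaining:
            # Brute-force enumeration of candidate subsets; exponential in
            # |remaining|, so only viable for toy examples. The paper's
            # algorithms are designed to avoid exactly this enumeration.
            candidates = [list(c)
                          for r in range(1, len(remaining) + 1)
                          for c in itertools.combinations(remaining, r)]
            weights = [set_worth(c, scores) for c in candidates]
            tier = rng.choices(candidates, weights=weights, k=1)[0]
            partition.append(tier)
            remaining = [o for o in remaining if o not in tier]
        return partition

    # Four objects with latent relevance scores; ties arise whenever
    # several objects land in the same tier.
    scores = {"a": 2.0, "b": 1.5, "c": 0.1, "d": 0.0}
    print(sample_ordered_partition(scores.keys(), scores))

Objects falling in the same tier correspond to tied ratings for a query, and the ordering of tiers gives the preference ranking over the partition.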
Language eng
DOI 10.1007/s10115-015-0840-9
Field of Research 080109 Pattern Recognition and Data Mining
0801 Artificial Intelligence And Image Processing
Socio Economic Objective 970108 Expanding Knowledge in the Information and Computing Sciences
HERDC Research category C1 Refereed article in a scholarly journal
ERA Research output type C Journal article
Copyright notice ©2016, Springer
Persistent URL http://hdl.handle.net/10536/DRO/DU:30076872


