Learning to Rank using Query-Level Rules

Authors

  • Adriano Veloso
  • Marcos A. Gonçalves, UFMG
  • Wagner Meira Jr., UFMG
  • Humberto Mossri

Keywords

Competence, Ranking, Stability

Abstract

Most existing learning to rank methods neglect query-sensitive information while producing functions to
estimate the relevance of documents (i.e., all examples in the training data are treated identically, regardless of the query
associated with them). This is counter-intuitive, since the relevance of a document depends on the query context (i.e.,
the same document may have different relevance levels, depending on the query associated with it). In this paper we show
that query-sensitive information is of paramount importance for improving ranking performance. We present novel
learning to rank methods. These methods use rules associating document features to relevance levels as building blocks
to produce ranking functions. Such rules may have different scopes: global rules (which do not exploit query-sensitive
information) and query-level rules. Firstly, we discuss a basic method, RE-GR (Relevance Estimation using Global
Rules), which neglects any query-sensitive information, and uses global rules to produce a single ranking function.
Then, we propose methods that effectively exploit query-sensitive information in order to improve ranking performance.
The RE-SR method (Relevance Estimation using Stable Rules) produces a single ranking function using stable rules,
which are rules carrying (almost) the same information regardless of the query context. The RE-QR method (Relevance
Estimation using Query-level Rules) is much finer-grained. It uses query-level rules to produce multiple query-level
functions. The estimates provided by such query-level functions are combined according to the competence of each
function (i.e., a measure of how close the estimate provided by a query-level function is to the true relevance of the
document). We conducted a systematic empirical evaluation using the LETOR 4.0 benchmark collections. We show that
the proposed methods outperform state-of-the-art learning to rank methods in most of the subsets, with gains ranging
from 2% to 9%. We further show that RE-SR and RE-QR, which use query-sensitive information while producing
ranking functions, achieve superior ranking performance when compared to RE-GR.
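The abstract does not give the exact scoring formulas, but the following minimal Python sketch illustrates the general idea it describes: rules associate document features with relevance levels, a (global or query-level) function estimates a document's relevance from the rules that match it, and in RE-QR the query-level estimates are combined with weights reflecting each function's competence. All names and the averaging scheme here (`Rule`, `estimate_relevance`, `combine_by_competence`) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes: a "rule" maps a document-feature condition to a relevance level,
# and a function estimates relevance by averaging the levels of the rules it fires.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    """A rule associating document features with a relevance level."""
    condition: Callable[[Dict[str, float]], bool]  # fires if the document matches
    relevance: float                               # relevance level predicted by the rule


def estimate_relevance(document: Dict[str, float], rules: List[Rule]) -> float:
    """Estimate a document's relevance as the average level of the rules that fire.

    With global rules (RE-GR) one rule set is mined from all training data;
    with query-level rules (RE-QR) a separate rule set is mined per query context.
    """
    fired = [r.relevance for r in rules if r.condition(document)]
    return sum(fired) / len(fired) if fired else 0.0


def combine_by_competence(document: Dict[str, float],
                          query_level_rules: Dict[str, List[Rule]],
                          competence: Dict[str, float]) -> float:
    """Combine query-level estimates, weighting each by its function's competence.

    `competence[q]` is assumed to measure how close the estimates of the function
    built for query context `q` tend to be to the true relevance (higher is better).
    """
    total_weight = sum(competence.values()) or 1.0
    return sum(
        competence[q] * estimate_relevance(document, rules)
        for q, rules in query_level_rules.items()
    ) / total_weight


if __name__ == "__main__":
    # Toy example: two query-level functions with different competences.
    doc = {"bm25": 0.8, "pagerank": 0.3}
    rules_q1 = [Rule(lambda d: d["bm25"] > 0.5, relevance=2.0)]
    rules_q2 = [Rule(lambda d: d["pagerank"] > 0.5, relevance=1.0),
                Rule(lambda d: d["bm25"] > 0.2, relevance=0.5)]
    score = combine_by_competence(doc, {"q1": rules_q1, "q2": rules_q2},
                                  competence={"q1": 0.9, "q2": 0.4})
    print(f"combined relevance estimate: {score:.3f}")
```

In this toy run, the higher-competence function dominates the combined estimate, which is the intuition behind weighting query-level functions by how accurate their estimates are expected to be.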

Published

2010-09-14

Section

Regular Articles