
DCG, but only the consistency of the regression loss is revealed. It is unknown whether the conclusion can be extended to pairwise and listwise loss functions, which are more popular than the pointwise regression loss.

• Existing analyses are conducted in the setting of either document ranking or subset ranking. However, as pointed out in the previous chapter, the setting of two-layer ranking would make more sense. Unfortunately, there is still no work on consistency analysis for this setting.

According to the above discussions, there is still a long way to go in the direction of statistical consistency for ranking.
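As a point of reference, the DCG measure discussed above can be sketched in a few lines of Python. This is a minimal illustration assuming the common exponential-gain, logarithmic-discount formulation; the cited analyses may use a slightly different variant.

```python
import math

def dcg(relevances, k=None):
    """Discounted cumulative gain of a ranked list of relevance grades,
    using gain 2^rel - 1 and discount 1/log2(position + 1)."""
    if k is not None:
        relevances = relevances[:k]
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances))

def ndcg(relevances, k=None):
    """NDCG: DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal = dcg(sorted(relevances, reverse=True), k)
    return dcg(relevances, k) / ideal if ideal > 0 else 0.0
```

A perfectly ordered list attains NDCG of 1, and any misordering of documents with distinct grades yields a value strictly below 1, which is why (1 − NDCG) can serve as a true loss.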

Remark Note that there is some other work highly related to the statistical consistency of ranking, although not directly on the topic. For example, in [2], the regret bound between the Kendall's τ and the pairwise 0-1 loss is derived when the ranking result is produced by voting over the pairwise classification results. In [1], the results in [2] are extended to the case where quick sort is used to produce the ranking result. In [11], the above findings are further extended to the case where NDCG is used as the true loss for ranking instead of the Kendall's τ. Given these theoretical results, if we further consider the previous findings [14] on the consistency of widely used surrogate loss functions, such as the hinge, exponential, and logistic losses, with respect to the 0-1 classification loss, we may also conclude that under certain conditions the pairwise ranking algorithms such as Ranking SVM, RankBoost, and RankNet (which minimize the hinge, exponential, and logistic losses, respectively) are consistent with the pairwise 0-1 loss, and thus with the Kendall's τ or (1 − NDCG).
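To make the relationships in the remark concrete, the following sketch (an illustration, not code from the cited works) evaluates the pairwise surrogate losses named above on a score margin, and computes the Kendall's τ of a predicted ranking. With no ties, the pairwise 0-1 loss rate equals (1 − τ)/2, which is exactly the link between pairwise consistency and the Kendall's τ that the results above exploit.

```python
import math

def pairwise_losses(s_u, s_v):
    """Surrogate losses on the margin t = s_u - s_v, where document u
    should be ranked above document v (so t > 0 is a correct pair)."""
    t = s_u - s_v
    return {
        "0-1":      1.0 if t <= 0 else 0.0,        # pairwise 0-1 loss
        "hinge":    max(0.0, 1.0 - t),             # Ranking SVM
        "exp":      math.exp(-t),                  # RankBoost
        "logistic": math.log(1.0 + math.exp(-t)),  # RankNet
    }

def kendall_tau(truth, pred):
    """Kendall's tau between two score lists (assumes no ties)."""
    n = len(truth)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    discordant = sum(1 for i, j in pairs
                     if (truth[i] - truth[j]) * (pred[i] - pred[j]) < 0)
    return 1.0 - 2.0 * discordant / len(pairs)
```

Each surrogate upper-bounds the 0-1 loss at t ≤ 0 (up to scaling), and minimizing the total surrogate over all document pairs pushes the number of discordant pairs, and hence (1 − τ)/2, toward zero.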

18.6 Exercises

18.1 List the different true loss functions used in previous work, and compare their pros and cons as the true loss for ranking.

18.2 Can you list the properties that the true loss for ranking should possess, and design such a true loss?

18.3 The definitions of most ranking measures do not consider the sampling of documents. As a result, extending them to the setting of two-layer ranking might be difficult. Please show how to extend ranking measures to take the sampling of documents into account.

References

1. Ailon, N., Mohri, M.: An efficient reduction from ranking to classification. In: Proceedings of the 21st Annual Conference on Learning Theory (COLT 2008), pp. 87-98 (2008)
2. Balcan, M.F., Bansal, N., Beygelzimer, A., Coppersmith, D., Langford, J., Sorkin, G.B.: Robust reductions from ranking to classification. In: Proceedings of the 20th Annual Conference on Learning Theory (COLT 2007), pp. 139-153 (2007)