ECIR 2011 Tutorial: Risk Management in Information Retrieval

November 1st, 2010

Jun Wang, University College London and Kevyn Collins-Thompson, Microsoft Research

1. Course objectives

Risk modelling and management are new concepts in Information Retrieval (IR) modelling. This way of thinking departs significantly from the classic information retrieval methodologies originating from the Probability Ranking Principle, the Robertson-Spärck Jones model (and the resulting BM25 formula), and statistical language models. Recent research in risk-aware IR models draws an analogy with financial risk management. It has been demonstrated that the ideas of uncertainty and retrieval risk, together with the resulting mathematical modelling tools, not only provide theoretical explanations of some empirical retrieval results (e.g., the need for diversification, the trade-off between MAP and MRR, and the justification for pseudo-relevance feedback), but also help us develop useful retrieval techniques such as risk-aware query expansion and optimal document ranking.

The tutorial aims to provide a comprehensive introduction to the emerging risk modelling and management techniques in information retrieval systems. While it covers the basic theories (such as the portfolio theory of IR and mean-variance analysis) and risk models of information retrieval, it also focuses on the resulting practical algorithms for query expansion, relevance ranking, and IR metric optimization, as well as their performance evaluation. Practical IR applications such as Web search engines, multimedia retrieval, and collaborative filtering will also be covered, together with a discussion of new opportunities for future research and applications.

2. Its relevance to the information retrieval community

Theoretical research and mathematical modelling of IR systems are the key drivers behind the development of the information retrieval (IR) field. The first theoretical study of information retrieval can be traced back to Maron and Kuhns' Probabilistic Indexing paper in 1960, where the classic school of thinking about IR systems was established [1]. We call it individualism here, as the goal is to come up with a relevance score for each of the documents and to rank them individually with respect to those scores [2, 3]. The two different document-oriented and query-oriented views on how to assign a probability of relevance of a document to a user need have resulted in two different types of practical models. The Robertson-Spärck Jones model (and the resulting BM25 formula) takes the query-oriented view (or need-oriented view), assuming a given information need and choosing the query representation in order to select relevant documents against others [4], while in the document-oriented view, language models aim to choose the appropriate document representation to match queries or judge their relevance [5, 6].

Individualism has drawbacks [7]. Recently, a new type of theoretical research has used a probabilistic framework to move beyond the Probability Ranking Principle and determine optimal ranking algorithms by considering the final-ranking context [8, 9, 10] or by regarding the returned documents as a whole [11]. The portfolio theory of information retrieval has emerged as a new theory in IR modelling and provides a sound theoretical framework and more powerful mathematical tools for understanding and analyzing retrieval processes such as query expansion [12, 13], document ranking [14, 15], and evaluation [16, 17]. Drawing an analogy with finance and financial risk modelling [18], it has been realized that ranking documents under uncertainty is not just about picking individual relevant documents (individualism), but about choosing the right combination of relevant documents [14]. Here, we name this new information retrieval modelling methodology portfolioism. This motivates IR researchers to quantify a ranked list of documents on the basis of its expected overall relevance (mean) and its variance, where the latter serves as a measure of risk.
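The mean-variance idea in the last sentence can be made concrete with a short greedy ranker in the spirit of [14]: at each rank position, pick the document whose discounted expected relevance best outweighs a penalty for its own variance and its covariance with the documents already ranked. The mean/covariance inputs, the DCG-style position weights, and the risk parameter `b` below are illustrative assumptions for this sketch, not the exact formulation of the cited work.

```python
import numpy as np

def portfolio_rank(mu, sigma, b=2.0):
    """Greedy mean-variance ranking sketch: at each rank, pick the document
    with the best marginal (expected relevance - risk penalty) trade-off.

    mu    : expected relevance per document
    sigma : covariance matrix of the relevance estimates
    b     : risk preference (b > 0 penalises variance, b < 0 rewards it)
    """
    n = len(mu)
    discount = 1.0 / np.log2(np.arange(n) + 2)  # DCG-style position weights
    ranked, remaining = [], set(range(n))
    for k in range(n):
        w_k = discount[k]
        best, best_score = None, -np.inf
        for d in remaining:
            # covariance of d with the documents ranked so far
            corr = sum(discount[i] * sigma[d, ranked[i]] for i in range(k))
            score = w_k * mu[d] - b * (w_k ** 2 * sigma[d, d] + 2 * w_k * corr)
            if score > best_score:
                best_score, best = score, d
        ranked.append(best)
        remaining.remove(best)
    return ranked

mu = np.array([0.9, 0.88, 0.5])        # expected relevance
sigma = np.array([[0.20, 0.18, 0.0],   # docs 0 and 1 are near-duplicates
                  [0.18, 0.20, 0.0],
                  [0.0,  0.0,  0.2]])
print(portfolio_rank(mu, sigma, b=2.0))  # [0, 2, 1]: diversifies
print(portfolio_rank(mu, sigma, b=0.0))  # [0, 1, 2]: pure mean ranking
```

With a risk-averse `b > 0`, two strongly correlated documents are not both placed at the top even when both have high expected relevance; setting `b = 0` recovers ranking by expected relevance alone.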

The risk modelling and portfolio theory of IR provide theoretical explanations of some empirical retrieval results, such as the need for diversification, the trade-off between MAP and MRR, and the justification for pseudo-relevance feedback. They also guide us in developing many useful retrieval techniques. For example, risk modelling has been successfully applied to query expansion [12, 13], and mean-variance analysis has been adopted for evaluation [16, 17]. The latest developments in risk modelling directly maximize the expectation of various standard metrics (average precision, mean reciprocal rank, discounted cumulative gain, etc.) [19].
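As a toy illustration of optimizing the expectation of a metric rather than a point estimate, consider the expected reciprocal rank of a list whose documents are relevant independently with probability `p[i]`. The independence assumption and the function names below are ours for illustration ([19] develops the general statistical treatment); the closed form can be checked against a Monte Carlo simulation:

```python
import numpy as np

def expected_rr(p):
    """Closed-form E[reciprocal rank] when document at rank k is relevant
    independently with probability p[k] (illustrative assumption)."""
    e, none_above = 0.0, 1.0
    for k, pk in enumerate(p, start=1):
        e += none_above * pk / k      # first relevant document is at rank k
        none_above *= (1.0 - pk)
    return e

def sampled_rr(p, n_samples=200_000, seed=0):
    """Monte Carlo check of the same expectation."""
    rng = np.random.default_rng(seed)
    rel = rng.random((n_samples, len(p))) < np.asarray(p)
    first = rel.argmax(axis=1)        # index of first relevant doc (if any)
    any_rel = rel.any(axis=1)
    return float(np.where(any_rel, 1.0 / (first + 1), 0.0).mean())

p = [0.8, 0.5, 0.3]
print(expected_rr(p))   # 0.86 = 0.8 + 0.2*0.5/2 + 0.2*0.5*0.3/3
```

Given such closed forms, one can search for the ordering that maximizes the expected metric directly, instead of ranking by point estimates of relevance alone.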

3. The tutorial content and how the tutorial will be structured

Theme 1 The classic risk modelling era (Individualism) – 30 minutes (It should be emphasized that many previous tutorials on information retrieval models were limited to Theme 1, mostly its items 1 and 2. To give a complete and consistent view, we will spend only half an hour on it, yet provide a fresh discussion and new insights from the novel risk-modelling viewpoint.)

  1. Probability Ranking Principle [1, 2, 3, 7]
  2. Classic IR models [4, 5]
  3. Handling Ranking Context [8, 9, 10]

Theme 2 The modern risk modelling era (Portfolioism) – 120 minutes

  1. Ranking, IR metric optimization, and mean-variance analysis [14, 15, 19]
  2. Risk-aware Query expansion [12, 13]
  3. Evaluation [16, 17]
  4. Other applications (multimedia retrieval, collaborative filtering, etc.) [11, 24, 25]

Theme 3 Relation to other recent developments – 30 minutes

  1. Robust classification [26, 27]
  2. Diversification and multi-armed bandits [20, 21, 22]
  3. Relation to the Quantum Probability Ranking Principle [23]
  4. Future opportunities

4. Planned course materials

Slides, handouts, and demos.

5. Key references

[1]    M. E. Maron and J. L. Kuhns. On relevance, probabilistic indexing and information retrieval. J. ACM, 7(3), 1960.

[2]    W. S. Cooper. The inadequacy of probability of usefulness as a ranking criterion for retrieval system output. University of California, Berkeley, 1971.

[3]    S. E. Robertson and N. Belkin. Ranking in principle. Journal of Documentation, 1978.

[4]    S. E. Robertson and K. Spärck Jones. Relevance weighting of search terms. Journal of the American Society for Information Science, 27(3):129–46, 1976.

[5]    D. Hiemstra. Using language models for information retrieval. PhD thesis, University of Twente, 2001.

[6]    J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In SIGIR, 2001.

[7]    M. D. Gordon and P. Lenk. A utility theoretic examination of the probability ranking principle in information retrieval. JASIS, 42(10):703–714, 1991.

[8]    C. Zhai and J. D. Lafferty. A risk minimization framework for information retrieval. Inf. Process. Manage., 42(1):31–55, 2006.

[9]    H. Chen and D. R. Karger. Less is more: probabilistic models for retrieving fewer relevant documents. In SIGIR, 2006.

[10] J. Carbonell and J. Goldstein. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In SIGIR, 1998.

[11] J. Wang. Mean-variance analysis: A new document ranking theory in information retrieval. In ECIR, 2009.

[12] K. Collins-Thompson. Estimating robust query models with convex optimization. In Advances in Neural Information Processing Systems 21 (NIPS), 2008.

[13] K. Collins-Thompson. Reducing the risk of query expansion via robust constrained optimization. In CIKM, 2009.

[14] J. Wang and J. Zhu. Portfolio theory of information retrieval. In SIGIR, 2009.

[15] J. Zhu, J. Wang, M. Taylor, and I. Cox. Risky business: Modeling and exploiting uncertainty in information retrieval. In SIGIR, 2009.

[16] K. Collins-Thompson. Accounting for stability of retrieval algorithms using risk-reward curves. In SIGIR 2009 Workshop on the Future of Evaluation in Information Retrieval, pp. 27–28, 2009.

[17] J. Zhu, J. Wang, and V. Vinay. Topic (query) selection for IR evaluation. In SIGIR, 2009.

[18] H. Markowitz. Portfolio selection. Journal of Finance, 1952.

[19] J. Wang and J. Zhu. On statistical analysis and optimization of information retrieval effectiveness metrics. In SIGIR, 2010.

[20] A. Slivkins, F. Radlinski, and S. Gollapudi. Learning optimally diverse rankings over large document collections. In ICML, 2010.

[21] T. Joachims, T. Hofmann, Y. Yue, and C.-N. Yu. Predicting structured objects with support vector machines. Communications of the ACM (CACM).

[22] C. L. A. Clarke, M. Kolla, G. V. Cormack, O. Vechtomova, A. Ashkan, S. Büttcher, and I. MacKinnon. Novelty and diversity in information retrieval evaluation. In SIGIR, 2008.

[23] G. Zuccon and L. Azzopardi. Using the quantum probability ranking principle to rank interdependent documents. In ECIR, 2010.

[24] R. Aly, A. Doherty, D. Hiemstra, and A. Smeaton. Beyond shot retrieval: Searching for broadcast news items using language models of concepts. In ECIR, 2010.

[25] X. Y. Wang and M. S. Kankanhalli. Portfolio theory of multimedia fusion. In ACM International Conference on Multimedia (ACMMM 2010), 2010.

[26] G. R. G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan. A robust minimax approach to classification. Journal of Machine Learning Research, 3:555–582, 2002.

[27] M. Dredze, K. Crammer, and F. Pereira. Confidence-weighted linear classification. In ICML, 2008.
