File(s) under permanent embargo
Evaluating non-deterministic retrieval systems
Conference contribution, posted on 2014-01-01, 00:00. Authored by G. K. Jayasinghe, W. Webber, M. Sanderson, Lasitha Dharmasena, J. S. Culpepper
The use of sampling, randomized algorithms, or training on unpredictable user inputs in Information Retrieval often leads to non-deterministic outputs. Evaluating the effectiveness of systems incorporating these methods is challenging, since each run may produce a different effectiveness score, and current IR evaluation techniques do not address this problem. Using distributed information retrieval as a case study, we propose a solution based on multivariate linear modeling. We show that the approach provides a consistent and reliable way to compare the effectiveness of non-deterministic IR algorithms, and explain how statistics can safely be used to show that two IR algorithms have equivalent effectiveness. Copyright 2014 ACM.
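The abstract's closing point, that statistics can show two systems are *equivalent* rather than merely "not significantly different", is commonly handled with an equivalence test such as the two one-sided tests (TOST) procedure. The sketch below is an illustration of that general idea, not the paper's multivariate method: it declares two non-deterministic systems equivalent when the 90% confidence interval of their mean score difference (normal approximation) lies inside a chosen equivalence margin `delta`. The function name, the margin of 0.02, and the simulated MAP scores are all assumptions for illustration.

```python
import math
import random
import statistics

def equivalence_test(scores_a, scores_b, delta=0.02, z=1.645):
    """TOST-style equivalence check via confidence-interval inclusion.

    Equivalence is declared when the 90% CI of the mean difference
    in per-run effectiveness lies entirely within (-delta, +delta).
    Uses a normal approximation, so it assumes reasonably many runs.
    """
    diff = statistics.mean(scores_a) - statistics.mean(scores_b)
    se = math.sqrt(statistics.variance(scores_a) / len(scores_a)
                   + statistics.variance(scores_b) / len(scores_b))
    lo, hi = diff - z * se, diff + z * se
    return -delta < lo and hi < delta

# Simulate two non-deterministic systems: repeated runs of each yield
# slightly different MAP scores around the same underlying mean.
random.seed(0)
runs_a = [0.300 + random.gauss(0, 0.005) for _ in range(30)]
runs_b = [0.301 + random.gauss(0, 0.005) for _ in range(30)]
print(equivalence_test(runs_a, runs_b))  # the two systems are equivalent
```

Note the asymmetry this fixes: a plain t-test that fails to reject the null does not establish equivalence; the CI-inclusion form above makes a positive claim that the true difference is smaller than a margin judged practically irrelevant.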
Event: ACM SIGIR Research and Development in Information Retrieval Conference (37th : 2014 : Gold Coast, Qld.)
Pagination: 911-914
Location: Gold Coast, Qld.
Place of publication: New York, NY
Publication classification: E3 Extract of paper
Copyright notice: 2014, Association for Computing Machinery
Title of proceedings: Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval; SIGIR 2014