Automated opinion detection: Implications of the level of agreement between human raters
journal contribution
Posted on 2010-05-01, authored by D. Osman, John Yearwood and P. Vamplew.

The ability to agree with the TREC Blog06 opinion assessments was measured for seven human assessors and compared with the submitted results of the Blog06 participants. The assessors achieved a fair level of agreement among themselves, although the range between individual assessors was large. It is therefore recommended that multiple assessors be used to assess opinion data, or that a pre-test be run to remove the most dissenting assessors from the pool before the assessment process begins. The possibility of inconsistent assessments in a corpus also raises concerns about training data for an automated opinion detection system (AODS), so a further recommendation is that AODS training data be assembled from a variety of sources. This paper establishes an aspirational target for an AODS by determining the level of agreement achievable by human assessors when judging whether an opinion on a given topic is present. Knowing the level of agreement among humans is important because it sets an upper bound on the expected performance of an AODS. While the AODSs surveyed achieved satisfactory results, none came close to this upper bound. © 2009 Elsevier Ltd. All rights reserved.
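The abstract's "fair level of agreement" is the Landis & Koch label for a chance-corrected agreement statistic such as Fleiss' kappa, which applies when more than two raters judge the same items. The paper's exact statistic is not stated in this record, so the sketch below is an illustration only: it computes Fleiss' kappa over a hypothetical subjects-by-categories count matrix, where the `counts` data and the binary opinionated/not-opinionated categories are invented for the example.

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for a subjects-by-categories count matrix.

    ratings[i, j] = number of raters who assigned subject i to
    category j; every row must sum to the same rater count n.
    """
    N, _ = ratings.shape
    n = ratings.sum(axis=1)[0]           # raters per subject
    p_j = ratings.sum(axis=0) / (N * n)  # overall category proportions
    # Per-subject observed agreement, then mean observed vs. chance agreement.
    P_i = (np.square(ratings).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical data: 5 documents judged by 7 assessors as
# opinionated (col 0) or not opinionated (col 1).
counts = np.array([[7, 0], [6, 1], [1, 6], [5, 2], [2, 5]])
print(f"{fleiss_kappa(counts):.3f}")  # ~0.365, "fair" on the Landis & Koch scale
```

A kappa in the 0.21-0.40 band is conventionally labelled "fair"; values near 1.0 would indicate near-perfect agreement, which is the kind of upper bound the paper argues an AODS cannot be expected to exceed.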
History
Journal: Information Processing and Management
Volume: 46
Pagination: 331-342
ISSN: 0306-4573
Language: en
Publication classification: CN.1 Other journal article
Issue: 3
Publisher: Elsevier BV