
File(s) under permanent embargo

Identifying items for moderation in a peer assessment framework

Version 2 2024-06-04, 03:56
Version 1 2018-05-25, 09:51
journal contribution posted on 2024-06-04, 03:56, authored by Simon James, Elicia Lanham, Vicky Mak, Lei Pan, Tim Wilkin, Guy Wood-Bradley
Peer assessment can be considered within the framework of group decision making and can hence take advantage of many of its proposed methods and evaluation processes. Despite the potential of peer assessment to greatly reduce the workload of educators, a key hurdle to its uptake is its perceived reliability, owing to the preconception that peers may not be as reliable or fair as ‘experts’. In this contribution, we consider approaches to moderation with the aim of increasing the accuracy of the scores given while reducing the total workload of the subject experts (or lecturers, in the university context). Firstly, we propose several indices which, in combination, can be used to estimate the reliability of peer markers. Secondly, we consider the consensus among the scores received by peers. We hence approach the problem of reliability from two angles and, from these considerations, can identify a subset of peers whose results should be flagged for moderation. We conduct numerical experiments to investigate the potential of these techniques in the context of peer assessment with heterogeneous marking behaviors.
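
To illustrate the two-angle idea sketched in the abstract, the following is a minimal Python sketch, assuming a marker-by-submission score matrix. The particular indices used here (a marker's mean deviation from the per-item median as a reliability index, and the spread of an item's received scores as a consensus index), along with the function name flag_for_moderation and its thresholds, are hypothetical stand-ins, not the indices proposed in the paper.

```python
# Hypothetical sketch of flagging submissions for expert moderation.
# The actual indices from the paper are not reproduced here.
import numpy as np

def flag_for_moderation(scores, reliability_tol=1.0, consensus_tol=1.0):
    """Return indices of submissions whose peer scores warrant moderation.

    scores : (n_markers, n_items) array; scores[i, j] is the mark that
             peer i gave to submission j (NaN where peer i did not mark j).
    """
    # Per-item consensus score: median of the marks each item received.
    consensus = np.nanmedian(scores, axis=0)

    # Reliability index (illustrative): a marker is treated as reliable
    # if their marks sit close to the consensus on the items they marked.
    deviation = np.abs(scores - consensus)
    marker_reliability = np.nanmean(deviation, axis=1)  # lower = more reliable

    # Consensus index (illustrative): spread of the scores an item received.
    item_spread = np.nanstd(scores, axis=0)

    # Flag items with high disagreement among markers, or items marked
    # mostly by peers whose marks deviate strongly from consensus.
    unreliable = marker_reliability > reliability_tol
    marked = ~np.isnan(scores)
    frac_unreliable = (marked & unreliable[:, None]).sum(0) / marked.sum(0)
    return np.where((item_spread > consensus_tol) | (frac_unreliable > 0.5))[0]
```

Using the median rather than the mean as the consensus makes the illustrative reliability index robust to a single outlying marker, which matters when marking behaviors are heterogeneous, as the abstract assumes.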

History

Journal

Knowledge-Based Systems

Volume

162

Pagination

211-219

Location

Amsterdam, The Netherlands

ISSN

0950-7051

eISSN

1872-7409

Language

English

Publication classification

C1 Refereed article in a scholarly journal

Copyright notice

2018, Elsevier B.V.

Publisher

Elsevier