
Identifying items for moderation in a peer assessment framework

journal contribution
posted on 2018-12-01, 00:00 authored by Simon James, Elicia LanhamElicia Lanham, Vicky MakVicky Mak, Lei PanLei Pan, Tim WilkinTim Wilkin, Guy Wood-BradleyGuy Wood-Bradley
Peer assessment can be considered within the framework of group decision making and can hence take advantage of many of its proposed methods and evaluation processes. Despite the potential of peer assessment to greatly reduce the workload of educators, a key hurdle to its uptake is its perceived reliability: there is a preconception that peers may not be as reliable or fair as ‘experts’. In this contribution, we consider approaches to moderation that aim to increase the accuracy of the scores given while reducing the total workload of the subject experts (lecturers, in the university context). First, we propose several indices which, in combination, can be used to estimate the reliability of peer markers. Second, we consider the consensus among the scores received by peers. We thus approach the problem of reliability from two angles, and from these considerations we can identify a subset of peers whose results should be flagged for moderation. We conduct numerical experiments to investigate the potential of these techniques in the context of peer assessment with heterogeneous marking behaviors.
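The abstract does not specify the proposed indices, so the following is a purely illustrative sketch of the general idea (NumPy assumed): a hypothetical marker-reliability index based on each peer's deviation from the per-item consensus, a hypothetical item-level consensus measure based on score spread, and a rule that flags items for moderation when consensus is low or a marker appears unreliable. The thresholds and function names here are invented for illustration and are not taken from the paper.

```python
import numpy as np

# scores[i, j] is the mark peer i gave to item j (NaN if peer i did not mark item j).

def marker_reliability(scores):
    """Hypothetical reliability index: mean absolute deviation of each
    marker's scores from the per-item consensus (the mean of all marks)."""
    consensus = np.nanmean(scores, axis=0)   # per-item consensus score
    dev = np.abs(scores - consensus)         # each marker's deviation per item
    return np.nanmean(dev, axis=1)           # one reliability index per marker

def item_consensus(scores):
    """Hypothetical consensus measure: standard deviation of the marks each
    item received (low spread = high consensus among peers)."""
    return np.nanstd(scores, axis=0)

def flag_for_moderation(scores, rel_thresh=1.0, spread_thresh=1.5):
    """Flag items whose marks disagree strongly, or that were marked by a
    peer whose scores deviate substantially from the consensus."""
    rel = marker_reliability(scores)
    spread = item_consensus(scores)
    unreliable = rel > rel_thresh            # markers far from consensus
    # An item is implicated if any of its markers is deemed unreliable.
    marked_by_unreliable = np.any(~np.isnan(scores) & unreliable[:, None], axis=0)
    return (spread > spread_thresh) | marked_by_unreliable

# Example: three peers mark three items; peer 2 is an outlier on item 2.
scores = np.array([
    [8.0, 7.0, np.nan],
    [7.5, 2.0, 6.0],
    [8.5, 7.5, 6.5],
])
print(flag_for_moderation(scores))  # items flagged for lecturer moderation
```

In this toy setup, the two angles described in the abstract (reliability of individual markers, and consensus among received scores) are combined with a simple OR rule; the paper's actual combination of indices may differ.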

History

Journal

Knowledge-Based Systems

Volume

162

Pagination

211-219

Publisher

Elsevier

Location

Amsterdam, The Netherlands

ISSN

0950-7051

Language

eng

Publication classification

C1 Refereed article in a scholarly journal

Copyright notice

2018, Elsevier B.V.
