A focus on novel, confirmatory, and statistically significant results by journals that publish experimental audit research may result in substantial bias in the literature. We explore one type of bias known as p-hacking: a practice in which researchers, knowingly or unknowingly, adjust their data collection, analysis, and reporting until nonsignificant results become significant. Examining the experimental audit literature published in eight accounting and audit journals over the last three decades, we find an overabundance of p-values at or just below the conventional thresholds for statistical significance. This excess of "just significant" results indicates that some findings published in the experimental audit literature are potentially a consequence of p-hacking. We discuss potential remedies that, if adopted, may alleviate concerns regarding p-hacking and the publication of false positive results.
History
Journal: Behavioral Research in Accounting
Volume: 31
Season: Spring
Pagination: 119-131
Location: Lakewood Ranch, Fla.
ISSN: 1050-4753
eISSN: 1558-8009
Language: English
Publication classification: C1.1 Refereed article in a scholarly journal; C Journal article