Deakin University

File(s) under permanent embargo

Power, effects, confidence, and significance: an investigation of statistical practices in nursing research

journal contribution
posted on 2014-05-01, 00:00 authored by Cadeyrn Gaskin, B Happell
Objectives: To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported.
Design: Statistical review.
Data sources: Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors.
Review methods: Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified.
Results: The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR] = .24-.71), .98 (IQR = .85-1.00), and 1.00 (IQR = 1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR = .26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), chi-squared tests (8%), and Fisher's exact tests (7%), and were not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers.
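The experiment-wise error rates reported in the Results reflect a simple compounding effect: when k independent tests are each run at significance level alpha, the probability of at least one Type I error across the family is 1 - (1 - alpha)^k. A minimal sketch of that calculation (the function name is illustrative, not from the paper):

```python
def familywise_error(alpha: float, k: int) -> float:
    """Probability of at least one Type I error across k independent
    tests, each conducted at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** k

# At alpha = .05, roughly 15 independent tests are enough to push the
# experiment-wise Type I error rate to the median of .54 reported above.
print(round(familywise_error(0.05, 15), 2))  # → 0.54
```

The assumption of independence makes this an upper-bound sketch; correlated tests inflate the error rate less, but the qualitative point (the rate grows rapidly with the number of tests) stands.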
Conclusion: The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results of inferential tests based solely on whether or not they are statistically significant and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of experiment-wise Type I error inflation in their studies.
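The a priori power analyses the authors call for can be approximated with nothing beyond the standard normal distribution. The sketch below uses the normal approximation to the noncentral t distribution for a two-sided, two-sample t test; the function names are illustrative (not from the paper), and an exact t-based calculation (e.g. G*Power) gives slightly larger required sample sizes:

```python
from statistics import NormalDist

def power_two_sample(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample t test for a
    standardised effect size d (Cohen's d), via the normal
    approximation to the noncentral t distribution."""
    z = NormalDist()
    z_crit = z.inv_cdf(1.0 - alpha / 2.0)
    ncp = d * (n_per_group / 2.0) ** 0.5  # noncentrality parameter
    # The lower rejection region contributes negligibly and is ignored.
    return 1.0 - z.cdf(z_crit - ncp)

def required_n(d: float, power: float = 0.80, alpha: float = 0.05) -> int:
    """Smallest per-group n reaching the target power (a priori analysis)."""
    n = 2
    while power_two_sample(d, n, alpha) < power:
        n += 1
    return n

# Cohen's benchmark: a medium effect (d = .50) with 64 per group
# gives roughly .80 power.
print(round(power_two_sample(0.5, 64), 2))  # → 0.81
print(required_n(0.5))                      # → 63 per group (exact t: 64)
```

For a study powered only to detect a large effect, a true small or medium effect will usually go undetected, which is the pattern behind the median power of .40 for small effects reported in the Results.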

International journal of nursing studies

London, England

Publication classification

C Journal article, C1.1 Refereed article in a scholarly journal

Copyright notice

2014, Elsevier