P-values and statistical significance: New ideas for interpreting scientific results


When statistician Nicole Lazar published an editorial in The American Statistician earlier this year advocating changes in the way scientists handle the troublesome issue of statistical significance, her father—who trained as a sociologist—asked her, "Are you getting death threats on Twitter?"

Lazar, a professor of statistics at the University of Georgia, doesn't use Twitter, but the question reveals how contentious the issue of statistical significance is. "You don't often think about statisticians getting emotional about things," Lazar told an audience of writers attending the Science Writers 2019 conference held in State College, Pa., "but this is a topic that's been raising a lot of passion and discussion in our field." Lazar spoke on Oct. 27 as part of the New Horizons in Science briefing organized by the Council for the Advancement of Science Writing (CASW).

Many scientists determine whether the results of their experiments are “statistically significant” by using statistical tests that yield a number known as the “p-value.” A p-value below 0.05 is commonly considered significant, and is often erroneously characterized as meaning the findings are unlikely to be the result of chance. What the number actually reveals is less straightforward: formally, a p-value is the probability of obtaining results at least as extreme as those observed, assuming the null hypothesis is true, and even scientists have trouble explaining its precise meaning. Using the threshold of p < 0.05 has been shown to be problematic, misleading, and even dangerous. Lazar’s editorial, “Moving to a World Beyond 'p < 0.05',” discusses several alternatives that could free researchers from an arbitrary p-value cut-off.
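To make the idea concrete, here is a minimal sketch of how a p-value can be computed directly, using a permutation test on two made-up groups of measurements (the data and group names are invented for illustration and do not come from the article). It asks: if the group labels were meaningless, how often would a difference in means at least as large as the observed one arise by chance alone?

```python
import random

random.seed(0)

# Hypothetical measurements for two groups (made-up numbers for illustration).
group_a = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0, 5.3, 5.9]
group_b = [4.8, 4.5, 5.0, 4.9, 5.2, 4.7, 4.6, 5.1]

observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))

# Permutation test: under the null hypothesis the group labels are
# interchangeable, so shuffle the pooled data many times and count how
# often the shuffled difference is at least as large as the observed one.
pooled = group_a + group_b
n_a = len(group_a)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"p-value is approximately {p_value:.4f}")
```

Note what the result does and does not say: it is the probability of seeing data this extreme *given* that there is no real group difference, not the probability that the finding is due to chance — which is exactly the distinction the article says even scientists struggle to articulate.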
