In my January 9 blog post “A Mixed Bag,” I noted that one positive aspect of Simmons et al.’s 2011 paper is that it has generated a lot of discussion, and said “More on this in a later post.” That will probably expand to two or three posts, at least. This is the first.
Andrew Gelman’s 16 February 2012 blog post “False-positive psychology” has a number of comments. In particular, Gelman said,
“My main comment on Simmons et al. is that I’m not so happy with the framing in terms of “false positives”; to me, the problem is not so much with null effects but with uncertainty and variation.”
He’s got a good point. Here’s my own elaboration on it: It often seems that social scientists (and, to a lesser but still frequent extent, physical scientists) regard a hypothesis test as something definitive. One thing I’ve tried to stress on my website is that this is far from the case. (See especially the page Expecting Too Much Certainty and the links from it.)
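A small simulation makes the point concrete. The sketch below (my illustration, not anything from Simmons et al. or Gelman) repeats the same two-sample experiment many times with a fixed, modest true effect and reports the p-value each time, using a normal-approximation z-test:

```python
# Simulate repeated two-sample experiments with the SAME true effect,
# and see how much the p-value varies from replication to replication.
# All numbers (n = 20, effect = 0.5) are hypothetical choices.
import math
import random
import statistics

random.seed(1)

def two_sample_p(n=20, effect=0.5):
    """Two-sided p-value for a two-sample z-test (normal approximation)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(b) - statistics.mean(a)) / se
    # P(|Z| > |z|) under the standard normal, via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

pvals = [two_sample_p() for _ in range(10)]
print([round(p, 3) for p in pvals])
```

With a modest effect and small samples, the p-values typically scatter widely across replications, so the “significant / not significant” verdict of any single study is anything but definitive.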
Simmons et al. (and many other authors of research papers) discuss hypothesis tests, but do not even mention confidence intervals. I’ve sometimes heard the argument that, since standard errors are routinely given in papers, anyone who wants a confidence interval can construct it from them. This attitude neglects what I see as an important reason to include confidence intervals in the first place: they keep uncertainty upfront, so readers are less likely to neglect it. Merely reporting the result of a hypothesis test can easily lull readers into a false sense of certainty.
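For concreteness, here is what the “just use the standard error” recipe amounts to: a normal-approximation confidence interval built from a reported point estimate and its standard error. The numbers below are hypothetical; the point is that the width of the interval puts the uncertainty in plain view, which a bare p-value does not.

```python
# Normal-approximation confidence interval from a point estimate and
# its standard error. The estimate (0.30) and SE (0.12) are hypothetical.
import statistics

def normal_ci(estimate, se, level=0.95):
    """Two-sided normal-approximation CI at the given confidence level."""
    z = statistics.NormalDist().inv_cdf(0.5 + level / 2)  # ~1.96 for 95%
    return (estimate - z * se, estimate + z * se)

lo, hi = normal_ci(0.30, 0.12)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")  # prints "95% CI: (0.06, 0.54)"
```

Note how much this interval conveys that a p-value hides: the estimate is “significantly” positive, but values anywhere from near zero to nearly twice the point estimate remain plausible.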
Moreover, constructing appropriate confidence intervals when using the same data set to consider more than one question requires taking multiple inference (and its effect on power) into account – but Simmons et al. appear disinclined to recommend accounting for multiple inference, arguing (p. 7) that it would introduce “additional ambiguity [that] may make things worse.” This sounds to me like avoidance of uncertainty or other relevant complexities.
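To show what accounting for multiple inference costs in practice, here is a sketch of one standard (and deliberately simple) adjustment, the Bonferroni correction – my illustration, not a method Simmons et al. discuss. To keep the joint coverage of k simultaneous intervals at 95%, each individual interval is widened to confidence level 1 − 0.05/k:

```python
# Bonferroni adjustment for k simultaneous normal-approximation CIs:
# each interval uses significance level 0.05/k, so its critical value
# grows with k, and the intervals widen accordingly.
import statistics

def bonferroni_z(k, family_alpha=0.05):
    """Critical value for each of k simultaneous confidence intervals."""
    per_test_alpha = family_alpha / k
    return statistics.NormalDist().inv_cdf(1 - per_test_alpha / 2)

for k in (1, 3, 10):
    print(f"k = {k:2d}: z = {bonferroni_z(k):.3f}")
```

The critical value climbs from about 1.96 for a single interval to about 2.81 for ten simultaneous ones – wider intervals, and correspondingly less power. That loss of power is real, but ignoring it simply hides the uncertainty rather than removing it.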
Indeed, Simmons et al. appear to be seeking simple “solutions” that ignore, minimize, or dismiss the inherent ambiguity and uncertainty in science, rather than being upfront about this uncertainty.
While I’m on the topic of uncertainty: One interesting recent discussion of the inherent uncertainty in science is Iain Johnston’s article “The chaos within: Exploring noise in cellular biology,” Significance 9(4): 17, 2012. Johnston discusses the “essential randomness of cellular systems,” including some important causes and effects of this randomness and recent efforts to describe it.