Sanjay Srivastava’s January 2, 2012 post on his blog The Hardest Science, *An editorial board discusses fMRI analysis and “false-positive psychology”*, gives a link to a (summarized, with names redacted) account of an email discussion among the *Psychological Science* editorial board. The discussion concerned the suggestion that Simmons et al’s recommendations, as well as recommendations from an article concerning functional magnetic resonance imaging (fMRI) research, be adopted as policy of that journal. Most of the comments about the Simmons et al recommendations said that it would be premature to adopt them without further careful discussion, or that some of them were too rigid to accommodate what is appropriate in different situations; both objections seem sensible to me. But some of the comments bring up points that warrant further discussion. Here is one (probably more in later posts):

Discussant (6) said,

“I really do not need 1000 more words of terribly tedious text. Can we just put all these things in one convenient table (See an impromptu example below)?”

Well, if something can be put in a table, that would indeed be better than having to search through paragraphs to find a particular item. So I looked at the example provided. Here’s a snippet:

“Group analysis:

- Model used: mixed-effects
- Statistical thresholding: …”

Uh-oh — The table has missed some important information: “Mixed effects” covers a whole class of models. The author needs to include more information: Which factors are fixed, which are random? Is there any nesting, and if so what is nested in what? And *why* are these choices appropriate for the data collected and the question being studied? A table would probably not be adequate for the last question, in particular. (Note: This is not an isolated problem — I have often seen “mixed models” given as the “method of analysis” in methods sections of papers, with no mention of the information in the questions above.)
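To see why “mixed-effects” alone is not enough, here is a minimal sketch (in Python, with statsmodels) of the kind of specification a methods section would need to spell out. The data, variable names, and effect sizes are all invented for illustration; the point is that the choices of fixed effect, random intercepts, and nesting are explicit in the model call, and each would need to be reported and justified:

```python
# Hypothetical example: reaction times from subjects nested within
# scanning sites. All names and numbers here are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for site in range(3):
    site_eff = rng.normal(0, 0.5)          # random site intercept
    for subj in range(5):
        subj_eff = rng.normal(0, 0.3)      # random subject intercept
        for _ in range(20):
            cond = int(rng.integers(0, 2))
            rt = 1.0 + 0.2 * cond + site_eff + subj_eff + rng.normal(0, 0.1)
            rows.append({"site": site, "subject": f"{site}-{subj}",
                         "condition": cond, "rt": rt})
df = pd.DataFrame(rows)

# "Model used: mixed-effects" hides all of these decisions:
#   - condition is the (only) fixed effect;
#   - sites get random intercepts (groups="site", re_formula="1");
#   - subjects get random intercepts nested within sites (vc_formula).
model = smf.mixedlm("rt ~ condition", df, groups="site",
                    re_formula="1",
                    vc_formula={"subject": "0 + C(subject)"})
fit = model.fit()
print(fit.fe_params)
```

A reader of a paper that only says “mixed-effects” cannot reconstruct any line of that model call, which is exactly the reporting gap at issue.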

Continuing with the snippet from discussant (6)’s table:

“2. Statistical thresholding:

a. Adjustment for multiple comparisons employed: Gaussian Random Field theory

b. Threshold: Z > 2.3, p < 0.001”

Well, it’s good that adjusting for multiple comparisons is addressed, but just stating the method and thresholds once again neglects the reasoning: *Why* was Gaussian Random Field theory chosen as the method? (E.g., why not permutation or bootstrap methods? See Nichols and Hayasaka (2003) for discussion of the advantages and disadvantages of each method.) And *why* were the stated thresholds chosen?
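For contrast, a permutation-based family-wise error correction of the kind Nichols and Hayasaka compare can be sketched in a few lines. The idea is to build a null distribution of the *maximum* statistic across voxels, which controls the family-wise error rate without the smoothness assumptions Gaussian Random Field theory relies on. The data below are simulated and the sizes are arbitrary; this is an illustration of the technique, not anyone’s actual pipeline:

```python
# Max-statistic permutation threshold for a one-sample test
# (sign-flipping is valid when errors are symmetric about zero).
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 500

# Simulated per-subject contrast values: a true effect in the
# first 10 voxels only (all sizes invented for illustration).
data = rng.normal(0, 1, (n_subjects, n_voxels))
data[:, :10] += 1.5

def t_map(x):
    # One-sample t statistic at each voxel.
    return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(len(x)))

observed = t_map(data)

# Null distribution of the maximum t across voxels under random
# sign flips of each subject's data.
n_perm = 1000
max_null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1, 1], size=n_subjects)[:, None]
    max_null[i] = t_map(signs * data).max()

# FWE threshold at alpha = 0.05: the 95th percentile of the
# permutation max distribution, instead of a GRF-derived Z cutoff.
threshold = np.quantile(max_null, 0.95)
significant = observed > threshold
print(threshold, significant.sum())
```

Whichever method is used, the choice (and the chosen alpha) is a decision the reader needs explained, not just named in a table row.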

These omissions of *the details of the choice of methods* and *the reasoning behind those choices* point to an inadequacy in Simmons et al’s recommendations as well: they do not address the level of discussion of methods that is necessary to replicate a study, let alone to evaluate the appropriateness of the methods.