How do I get my statistics right? Solid advice for younger cognitive neuroscientists

Beginning (and also more advanced) cognitive neuroscientists often face the problem that neuroimaging is highly costly (up to thousands of dollars/euros per subject), and thus the number of subjects one can measure ends up being rather modest, typically from a few to a few tens. Furthermore, modern neuroimaging methods tend to produce a wealth of data per subject, and the poor neuroscientist quickly runs into the problem of having to decide whether and how to correct for multiple statistical comparisons. Adding to the confusion, a cognitive neuroscientist can easily find functional magnetic resonance imaging studies published in notable journals such as Science with as few as eight or even five subjects, and yet run into the argument that his/her "too few" 15 or 20 subjects are a severe problem, resulting in rejection of the study from a lesser journal.
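To make the multiple-comparisons problem concrete, here is a minimal simulation sketch (my own illustration, not taken from Friston's paper): when a separate statistical test is run at each of thousands of voxels, pure noise will produce hundreds of "significant" voxels at an uncorrected p < 0.05 threshold, while a simple correction such as Bonferroni brings the false positives under control. The subject and voxel counts below are arbitrary choices for illustration.

```python
# Illustration (not from Friston, 2012): why mass-univariate neuroimaging
# analyses require correction for multiple comparisons.
# We simulate null data (no true effect anywhere) at many "voxels" and
# count how many pass p < 0.05 uncorrected vs. Bonferroni-corrected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 15      # a typical modest neuroimaging sample
n_voxels = 10_000    # modern imaging yields even more tests than this

# Null data: pure noise, so every "significant" voxel is a false positive
data = rng.normal(size=(n_subjects, n_voxels))

# One-sample t-test against zero at every voxel
t_vals, p_vals = stats.ttest_1samp(data, popmean=0.0, axis=0)

alpha = 0.05
uncorrected_hits = int(np.sum(p_vals < alpha))            # expect roughly alpha * n_voxels
bonferroni_hits = int(np.sum(p_vals < alpha / n_voxels))  # expect close to zero

print(f"Uncorrected 'significant' voxels: {uncorrected_hits}")
print(f"Bonferroni 'significant' voxels:  {bonferroni_hits}")
```

Bonferroni is the bluntest possible correction; in practice neuroimaging analyses use less conservative methods (e.g., false discovery rate or random field theory), but the simulation shows why some correction is unavoidable.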

In a paper written in an unusual but highly entertaining ironic tone, Karl Friston (2012) presents the most common lines of criticism of the statistical analysis of neuroimaging data, offers advice and insights on how to design one's experiment so that it is statistically sound, and shows how to counter the most typical erroneous criticisms from reviewers. I see this as highly important, given the abundance of misunderstandings about statistics that one often runs into when trying to publish findings in neuroimaging (and in other fields of science).

It is important to note that none of one's hard work and exciting findings really contributes to scientific knowledge until the results have been published in a scientific journal. The peer-review process is inarguably a cornerstone of science, and good reviews often help improve the scientific quality of one's publications. On the other hand, delays in publication, or rejection of a manuscript (which results in an even larger delay in getting published), based on a misunderstanding of statistics, especially on the part of an expert reviewer, are highly unfortunate outcomes that slow down the progress of science.

Reference: Friston K. Ten ironic rules for non-statistical reviewers. NeuroImage (2012) 61: 1300–1310. http://dx.doi.org/10.1016/j.neuroimage.2012.04.018

