Tuesday, November 13, 2012

Something Fishy...


If you're a researcher who makes frequent use of fMRI, odds are that some interested layman has sent you something like this in the last couple of years:

Dead Fish, Exploding Bowels Win Spoof Nobel Prizes

If you're unlucky, you may have received numerous copies of such articles, along with half-joking suggestions that if fMRI scans are finding statistically significant activation in dead fish, it might be time to change professions. To briefly summarize: a researcher named Craig Bennett gave a dead salmon a simple task inside an fMRI scanner and demonstrated that, with careless statistics, the data could be made to show statistically significant results.

As humorous as Bennett's fishy fMRI study is, it brings to light real concerns about how fMRI data is analyzed.  Unfortunately, due to poor journalism and plain laziness, a large portion of readers are drawing the wrong conclusions.

Bennett's study aimed to warn fellow researchers against succumbing to the dangers of data overload.  fMRI analysis by its nature creates enormous quantities of data: a single scan contains tens of thousands of voxels, and each voxel amounts to its own statistical test.  If improper statistical tests are applied to that much data, a great many false positives can result.  Here is a very rough analogy.

Suppose you had 100,000 six-sided dice and a table with 100,000 slots, one per die.  Further suppose you hypothesized that some areas of the table were more prone than others to having a "6" land face up.  You toss your 100,000 dice onto the table and shake it around a bit to make sure every die has settled into a slot.  Then you analyze the table under the following conditions: any die with its "6" side facing upward is considered "significant."  But since you know a die naturally shows a "6" one time in six, you add a further constraint: you will look for clusters of adjacent significant (six-up) dice to confirm your hypothesis.  You decide how large a cluster of significant dice you require to support your hypothesis that some areas of the table favor "6", and proceed to check for such clusters.

As it happens, the probability that any non-edge "6" has at least one of its four neighbors also showing a "6" is slightly over 50% (1 - (5/6)^4 ≈ .52).  The probability that the second "6" extends to a third through one of its three remaining free neighbors is approximately 42% (1 - (5/6)^3 ≈ .42), and that figure holds for each additional "6" as long as the chain never wraps around to touch itself.  Multiply it all out and a given "6" anchors a chain of seven face-up sixes with probability under .01, a highly statistically significant finding.  So if we find clusters of seven six-up dice, is it reasonable to conclude that some areas of the table really are more prone to "6"?
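If you'd like to check those numbers yourself, here's a minimal sketch in Python, assuming (as above) four-way adjacency and a simple non-branching chain of sixes:

```python
# Back-of-the-envelope check of the dice-table probabilities,
# assuming four-way (up/down/left/right) adjacency.

# P(at least one of a die's 4 neighbors shows a 6)
p_first = 1 - (5 / 6) ** 4    # ~0.52

# Each later die in the chain already touches one cluster member,
# leaving 3 free neighbors through which the chain can extend.
p_next = 1 - (5 / 6) ** 3     # ~0.42

# P(a given 6 anchors a simple chain of seven 6s)
p_chain7 = p_first * p_next ** 5
print(f"{p_first:.3f} {p_next:.3f} {p_chain7:.4f}")   # 0.518 0.421 0.0069
```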

Of course not.  In fact, chance alone should scatter dozens of such clusters across our die-holding table.  Run the experiment again and we'd find a fresh crop of clusters, and odds are at least a few would land in the same or similar areas as the first time.  Yet no reasonable person would conclude that some table areas magically cause a "6" to be rolled more frequently.  This is simply the danger inherent in working with large amounts of data: even a small percentage of false positives is misleading when poorly interpreted.
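For the skeptical, a short Monte Carlo run makes the same point without any arithmetic.  This is a minimal sketch of the analogy above (not anything from Bennett's study): it assumes a hypothetical 316 x 316 table, roughly 100,000 slots, with the same four-way adjacency, and uses scipy's ndimage.label to find connected clusters of sixes:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)

# Roll one six-sided die into each slot of a ~100,000-slot table.
rolls = rng.integers(1, 7, size=(316, 316))
sixes = rolls == 6

# Label connected clusters of face-up sixes.  ndimage.label's default
# structuring element is exactly the four-way adjacency used above.
labels, n_clusters = ndimage.label(sixes)
sizes = np.bincount(labels.ravel())[1:]   # drop the background (label 0)

# Chance alone typically produces a few dozen clusters of 7+ sixes here.
print("clusters of 7 or more sixes:", int(np.sum(sizes >= 7)))
```

Every one of those clusters is "significant" by the criterion above, and every one of them is noise.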

Bennett's study was in no way about the failures of fMRI research.  It was a warning to researchers using such tools that they must be vigilant about their statistical methods.  Several statistical safeguards have been developed for exactly this peril of large data sets, chiefly multiple-comparisons corrections that control the false discovery rate or the family-wise error rate.  The majority of researchers already use these tools to ensure valid results; Bennett's point was simply that everyone should.
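As a concrete example, here is a minimal hand-rolled sketch of one such safeguard, the Benjamini-Hochberg false discovery rate procedure.  In real work you'd reach for an established library implementation rather than this toy version:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Sort the m p-values, compare the i-th smallest against
    (i / m) * alpha, and reject every hypothesis up to the
    largest p-value that passes.  Returns a boolean reject mask.
    """
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= np.arange(1, m + 1) / m * alpha
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        cutoff = np.nonzero(passed)[0].max()
        reject[order[:cutoff + 1]] = True
    return reject

# 10,000 pure-noise "voxels": uncorrected testing at p < .01 flags
# around 100 of them, while the corrected procedure flags almost none.
rng = np.random.default_rng(0)
null_p = rng.uniform(size=10_000)
print("uncorrected hits:", int(np.sum(null_p < 0.01)))
print("FDR-corrected hits:", int(np.sum(benjamini_hochberg(null_p))))
```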

FMRI analysis, like every research tool, has its drawbacks: poor temporal resolution, bulky and confining equipment, large up-front costs, and the need for highly trained staff, to name a few.  However, the idea that fMRI data is some sort of pseudoscience is not a valid conclusion to draw from Bennett's work.  In fact, even a small amount of investigation reveals that Bennett applied additional, properly corrected statistical tests in the fish study itself to confirm that the famous statistically significant voxels were in fact false positives.

Ultimately, Bennett shone a spotlight on a serious issue in a humorous way.  It's unfortunate that many readers have misunderstood his findings in a way that casts doubt on a useful experimental tool.  Then again, when you're dealing with nearly seven billion potential data points, it's inevitable that you'll find clusters of ignorance.



