Interviewer: Thank you for speaking with me, Adam. Obviously, the retraction of your language processing paper in 2012 was a significant career setback. Some people allege that you knowingly manipulated data in order to get the results that would merit publication. How do you respond to those allegations?
Adam: At no time did I modify data in any way in order to manipulate results.
Interviewer: So no data was changed at any point?
Adam: That is correct.
Interviewer: Was any data discarded for any reason?
Adam: Some data sets were discarded as is routine during a study of this nature.
Interviewer: And what were the reasons for ignoring those data sets?
Adam: In some cases post scan surveys revealed characteristics of subjects that made them unsuitable for the experiment. Other data sets were removed due to excessive head motion or an inability to normalize their data with our chosen methods.
Interviewer: So in other words, some people moved around too much or had previously unknown neurological conditions?
Adam: Yes. Or their head orientation or brain shape did not easily conform with a standard fitting model.
Interviewer: If you had included those data sets would your results have remained the same?
Adam: To include those data sets we would have had to adjust our analytic framework. It's possible that doing so would have altered the end results.
Interviewer: Several labs have attempted to replicate your findings, but none has yet succeeded. Do you believe your initial study to be in error?
Adam: In the time since publication it has become evident that our initial findings were likely incorrect. Unfortunately, there is always a small chance of a false positive in research, and this time we got unlucky.
Interviewer: Your critics say that in this case you made your own luck. They claim you selectively eliminated data and chose analytical methods which would lead to a positive result. In particular, Nick Harrison has been quoted as saying, "Nearly any data set and analytic method except those used would have led to a negative result." What's your response to that?
Adam: No doubt there are statistical techniques which would change the results of the study. I reject the idea that I chose my method in order to manipulate the result.
Interviewer: Was your method of analysis determined before data was collected?
Adam: It was not.
Interviewer: Was your hypothesis preregistered?
Adam: It was not.
Interviewer: So it would have been possible for you to choose a protocol and data set combination that would lead to a positive result.
Adam: Possible, but it did not occur.
Interviewer: How many subjects were included in this study?
Adam: Due to budget limitations we were only able to include fourteen participants.
Interviewer: How many participants had their data discarded for any reason?
Adam: Two data sets were discarded, leaving twelve.
Interviewer: Is twelve subjects enough to reach significant conclusions?
Adam: Due to the high cost of fMRI studies, twelve subjects is not an unusual sample size. It's possible that it led to an underpowered study, but it is not out of line with the field.
Interviewer: Does a small group of participants make it easier or more difficult to change results by altering analytic protocols and discarding data sets?
Adam: Experiments with few participants shift by a greater degree when a small number of subjects' data are discarded. In that way, yes, it would make altering the results easier.
Interviewer: Were you under pressure to publish something significant at the time of publication?
Adam: In academic research there is always a pressure to publish.
Interviewer: So there was no unusual pressure at that time?
Adam: The high cost of fMRI studies makes all such studies a significant investment. It would have been disappointing to have invested so much and not discovered anything of consequence. Beyond that, no.
Interviewer: Some have claimed that you didn't initially share your raw data in an effort to cover up fraud. Was that the case?
Adam: Data sharing in the field is the exception rather than the norm. As soon as the publication was called into question I made the raw data freely available to anyone who requested it.
Luckily, Adam Wilkinson and his research are fictional. He was invented to demonstrate how easily a researcher can guide a study's results.
In most fMRI studies some data is discarded. It's an unfortunate side effect of using such a precise instrument on subjects who have difficulty remaining still. An unscrupulous researcher, such as Adam, can determine which data sets, when eliminated, will move results toward a predetermined end.
Similarly, there are dozens of analytical methods a researcher can use to perform statistical comparisons. Each of these methods will give slightly different results, and some will almost certainly give a positive result. Here's an example.
Let's say Adam had thirty people scheduled to receive scans. After five people have been scanned, he begins to check his data between scans to see if he has a statistically significant result. On the twelfth scan the data barely tips into the statistically significant range, and Adam cancels the remaining scans. The data for scans 1-11 didn't show what Adam was looking for, but due to an anomaly on the twelfth scan Adam got lucky and the data worked out for him. Had scans 13-30 been conducted, the data would have regressed to the mean and Adam wouldn't have the result he sought. Since Adam didn't preregister his hypothesis or protocol, he can peek at the data for any number of statistical tests and stop whenever any of them dips into the range of significance. Unfortunately, after the fact there's really no way to tell whether Adam did this.
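This kind of "peeking" is easy to simulate. The sketch below is purely illustrative (the subject counts and the simple normal-approximation t-test are my choices, not anything from Adam's fictional study): it draws null data one subject at a time, tests after every new scan, and stops the moment the p-value dips below 0.05.

```python
# Simulate optional stopping: scan subjects one at a time from a null
# distribution (no real effect) and stop as soon as p < 0.05.
import math
import random
import statistics

def t_test_p(sample):
    """Two-sided one-sample test of mean 0, using a normal approximation
    to the t distribution (slightly anticonservative at small n; fine
    for illustration)."""
    n = len(sample)
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return math.erfc(abs(t) / math.sqrt(2))

def peeking_experiment(max_n=30, min_n=5):
    """Return True if peeking after every scan ever yields p < 0.05."""
    data = []
    for _ in range(max_n):
        data.append(random.gauss(0, 1))  # null data: no true effect
        if len(data) >= min_n and t_test_p(data) < 0.05:
            return True  # cancel the remaining scans and "publish"
    return False

random.seed(0)
trials = 2000
false_positives = sum(peeking_experiment() for _ in range(trials))
print(f"False positive rate with peeking: {false_positives / trials:.2f}")
```

Even though there is no real effect anywhere in this simulation, stopping at the first significant peek pushes the false positive rate well above the nominal 5%.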
fMRI data is by its nature very noisy. It requires quite a bit of pre-processing before it can be analyzed in a meaningful fashion. How this pre-processing is done can change results wildly. Adam can adjust his pre-processing parameters in a large variety of ways in order to manipulate his results. As long as Adam keeps his parameters reasonable, no one will question this aspect of his research.
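A rough way to see the effect of "pipeline shopping" is to model each preprocessing choice as producing a slightly different per-subject estimate from the same underlying data, then keep whichever pipeline gives the smallest p-value. The numbers below (twelve subjects, eight pipelines, the size of the perturbation) are assumptions for illustration, not real fMRI parameters.

```python
# Simulate choosing the most favorable preprocessing pipeline on null data.
import math
import random
import statistics

def one_sample_p(xs):
    """Two-sided one-sample test of mean 0 (normal approximation)."""
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(len(xs)))
    return math.erfc(abs(t) / math.sqrt(2))

def best_pipeline_p(n_subjects=12, n_pipelines=8):
    """Min p-value over pipelines applied to the same null data set."""
    base = [random.gauss(0, 1) for _ in range(n_subjects)]
    best = 1.0
    for _ in range(n_pipelines):
        # Different parameter settings nudge each subject's estimate a bit.
        scores = [x + random.gauss(0, 0.3) for x in base]
        best = min(best, one_sample_p(scores))
    return best

random.seed(0)
trials = 2000
hits = sum(best_pipeline_p() < 0.05 for _ in range(trials))
print(f"Null studies reaching p < 0.05 via pipeline shopping: {hits / trials:.2f}")
```

The pipelines are highly correlated with one another, so this is far milder than running eight independent tests, yet the false positive rate still climbs well past 5%.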
fMRI studies look at a particular region of interest in the brain. If Adam is conducting research on language processing, then there's a general region that should be his focus. But the exact boundary of that region is a matter of Adam's discretion. Does he outline a region that's just a little more anterior than usual, thus including a bit of the visual cortex? If so, he can show activation in his region of interest that is really attributable to the visual cortex, not any sort of language processing. This sort of deception is detectable, but only with close inspection by a proficient expert. Even then, Adam could present it as a simple error that wouldn't change results.
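A toy one-dimensional "brain" makes the trick concrete. All coordinates here are invented: 100 voxels, with genuine activation only in voxels 80-99 (standing in for the visual cortex), and a hypothetical "language" ROI that gets nudged toward those voxels.

```python
# Toy demo: shifting an ROI boundary toward a genuinely active region
# inflates the ROI's mean activation.
import random

random.seed(0)
# Null activation everywhere except the "visual cortex" voxels 80-99.
activation = [random.gauss(0, 0.1) for _ in range(100)]
for v in range(80, 100):
    activation[v] += 1.0  # real visual-cortex activation

def roi_mean(voxels):
    return sum(activation[v] for v in voxels) / len(voxels)

honest_roi = range(55, 75)   # hypothetical language region
shifted_roi = range(65, 85)  # nudged to overlap the visual cortex

print(f"honest ROI mean:  {roi_mean(honest_roi):+.2f}")
print(f"shifted ROI mean: {roi_mean(shifted_roi):+.2f}")
```

The shifted ROI picks up only a quarter of its voxels from the active region, but that is enough to pull its mean well away from zero while looking, on paper, like a defensible anatomical boundary.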
What comparison does Adam make? Does he compare individuals to their own baseline scan or to a group normalized scan? Does he group subjects into male and female and conduct comparisons within those groups only? Perhaps divisions are created based on gender and age, or on age alone. Each comparison made allows Adam another chance at a statistically significant result. Since he didn't state his hypothesis before starting the experiment, he can test as many things as he can imagine until one of them comes up significant by chance.
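We can simulate the subgroup version of this directly. The sketch below (hypothetical data and cuts, not the real study) generates twelve null subjects with random sex and age labels, then tests every grouping mentioned above and keeps the smallest p-value.

```python
# Simulate subgroup shopping: test many cuts of the same null study.
import math
import random
import statistics

def one_sample_p(xs):
    """Two-sided one-sample test of mean 0 (normal approximation)."""
    if len(xs) < 3:
        return 1.0  # too few subjects to test this cut
    t = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(len(xs)))
    return math.erfc(abs(t) / math.sqrt(2))

def null_study():
    """One simulated 12-subject study with no real effect anywhere."""
    subjects = [{"score": random.gauss(0, 1),
                 "sex": random.choice("MF"),
                 "age": random.choice(("young", "old"))}
                for _ in range(12)]
    cuts = [
        subjects,                                              # everyone
        [s for s in subjects if s["sex"] == "M"],
        [s for s in subjects if s["sex"] == "F"],
        [s for s in subjects if s["age"] == "young"],
        [s for s in subjects if s["age"] == "old"],
        [s for s in subjects if s["sex"] == "M" and s["age"] == "young"],
        [s for s in subjects if s["sex"] == "F" and s["age"] == "old"],
    ]
    return min(one_sample_p([s["score"] for s in c]) for c in cuts)

random.seed(0)
trials = 2000
hits = sum(null_study() < 0.05 for _ in range(trials))
print(f"Null studies with at least one 'significant' cut: {hits / trials:.2f}")
```

Seven overlapping cuts of the same twelve subjects are enough to find a "significant" subgroup in a healthy fraction of studies where nothing is going on at all.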
It's easy to see that, given all these factors, a researcher has thousands of possible end results. The sheer volume of possibilities nearly guarantees that some null hypothesis can be rejected and the experimenter can find a publishable result. Most importantly, all of this can be done without "changing" any of the data.
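The arithmetic behind that guarantee is simple. If Adam's analytic choices amount to k roughly independent tests at alpha = 0.05 (independence is an idealization; correlated tests inflate more slowly), the chance of at least one false positive is 1 - (1 - alpha)^k:

```python
# Family-wise false positive rate for k independent tests at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:4d} tests -> P(at least one false positive) = {p_any:.2f}")
```

Five tests already give roughly a 23% chance of a spurious hit, twenty give about 64%, and a hundred make a "discovery" all but certain at over 99%.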
Adam's answers in the interview would in most cases be typical of a researcher in his field. Small studies without preregistered protocols and with undisclosed discarded data sets are routine in fMRI research. Unfortunately, this presents a ripe opportunity for researchers to manipulate their protocols in order to meet publishing pressures.
Many researchers would argue that fMRI research is by its nature exploratory and observational. However, it is dishonest to make such a claim while at the same time hiding behind the almighty p-value. Either you are hypothesis testing, in which case a p-value is valid and you know your hypothesis beforehand, or you're conducting an observational study, in which case a p-value isn't relevant. You can't have your cake and eat it too.
That's all for this week. Until next time, stay safe and rational.