Interpretation of functional MRI data called into question.

Bucky

Software faults raise questions about the validity of brain studies.
http://arstechnica.com/science/2016...y-brain-activity-may-be-exaggerating-results/

Is this really as bad as it sounds? The authors certainly think so. "This calls into question the validity of countless published fMRI studies based on parametric clusterwise inference." It's not clear how many of those there are, but they're likely to be a notable fraction of the total number of studies that use fMRI, which the authors estimate at 40,000.
Ouch! :eek:
 
This relates to a paper I have linked to several times before:

http://pps.sagepub.com/content/4/3/274.short?rss=1&ssource=mfc

Functional magnetic resonance imaging (fMRI) studies of emotion, personality, and social cognition have drawn much attention in recent years, with high-profile studies frequently reporting extremely high (e.g., >.8) correlations between brain activation and personality measures. We show that these correlations are higher than should be expected given the (evidently limited) reliability of both fMRI and personality measures. The high correlations are all the more puzzling because method sections rarely contain much detail about how the correlations were obtained. We surveyed authors of 55 articles that reported findings of this kind to determine a few details on how these correlations were computed. More than half acknowledged using a strategy that computes separate correlations for individual voxels and reports means of only those voxels exceeding chosen thresholds. We show how this nonindependent analysis inflates correlations while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample. In addition, we argue that, in some cases, other analysis problems likely created entirely spurious correlations. We outline how the data from these studies could be reanalyzed with unbiased methods to provide accurate estimates of the correlations in question and urge authors to perform such reanalyses. The underlying problems described here appear to be common in fMRI research of many kinds—not just in studies of emotion, personality, and social cognition.
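
To get a feel for how much that selection step alone inflates things, here's a quick simulation (my own sketch in Python, not the paper's code): correlate thousands of pure-noise "voxels" with a random "personality" score, keep only the voxels whose correlation exceeds a threshold, and average the survivors. The true correlation is zero, yet the selected voxels report a substantial one.

```python
# Sketch of the "nonindependent analysis" described above: selecting
# voxels by their correlation with a score, then reporting the mean
# correlation of the selected voxels, inflates the estimate.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 50_000
threshold = 0.4  # arbitrary voxel-selection cutoff (an assumption of mine)

scores = rng.standard_normal(n_subjects)              # "personality" measure
voxels = rng.standard_normal((n_subjects, n_voxels))  # "activation": pure noise

# Per-voxel Pearson correlation with the behavioural score.
z_scores = (scores - scores.mean()) / scores.std()
z_voxels = (voxels - voxels.mean(axis=0)) / voxels.std(axis=0)
r = z_voxels.T @ z_scores / n_subjects                # shape: (n_voxels,)

survivors = r[np.abs(r) > threshold]
print("voxels passing threshold:", survivors.size)
print(f"mean |r| among survivors: {np.abs(survivors).mean():.2f} (true r = 0)")
```

With 20 subjects the sampling noise on each voxel's correlation is large, so thousands of noise voxels clear the cutoff, and their mean correlation comes out well above zero - a "reassuring-looking" number produced entirely by the selection.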

I also think that although fMRI may be useful as a clinical tool - identifying which part of the brain is going wrong - it doesn't tell us much about the nature of consciousness. I liken it to trying to understand the workings of a computer by measuring the heat output of the various chips as various pieces of software are run.

Such an exercise would be particularly futile because the memory in a computer is normally paged - so the physical address of a piece of memory (which could probably be associated with its chip number) is nearly irrelevant - a program can use whatever memory is available, and the addresses are mapped by the hardware. While there is probably no exact parallel with brain function, the fact that the brain can reconfigure after damage (plasticity) may mean this analogy isn't so far-fetched.
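
For what it's worth, here is a toy illustration of the paging point (my own sketch, nothing to do with fMRI software): the program always uses the same virtual address, while the physical location is whatever the page table currently says, and can be remapped without the program noticing.

```python
# Toy virtual-to-physical address translation via a page table.
PAGE_SIZE = 4096

# Hypothetical page table: virtual page number -> physical frame number,
# assigned arbitrarily by the "operating system" at load time.
page_table = {0: 7, 1: 2, 2: 9}

def translate(virtual_addr: int) -> int:
    """Map a virtual address to a physical address via the page table."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(4200)))  # data currently in frame 2 -> 0x2068
page_table[1] = 5            # remap the page, as after reconfiguration
print(hex(translate(4200)))  # same virtual address, now frame 5 -> 0x5068
```

Watching which physical chip gets warm tells you nothing stable about the program, for exactly this reason.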

David
 
Bucky said:
Software faults raise questions about the validity of brain studies...

Thanks, that's helped improve my understanding of fMRI-type measurements a little more.

Here's another one - now that you've pointed it out - dealing with similar issues: the inappropriate use of clusterwise thresholds to infer significance, using statistical methods that rest on parametric assumptions...

Here the authors focus on the liberal thresholds that researchers set in the software to produce their results... using the default threshold option in the software seems to be the norm... suggesting to me that many medical researchers don't really have a very good understanding of the equipment, the statistical software, and its limitations...

"...we show that the use of liberal primary thresholds (e.g., p < .01) is endemic, and that the largest determinant of the primary threshold level is the default option in the software used..."

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4214144/
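
Here's a rough simulation of why that primary threshold matters (my own sketch, not the paper's method)... smooth some pure-noise images, threshold them at p < .01 and at p < .001, and compare the surviving clusters... the liberal default leaves much bigger noise blobs for clusterwise inference to mistake for signal...

```python
# Smooth pure-noise "brain slices" thresholded at a liberal vs. a strict
# primary threshold: the liberal one leaves far larger contiguous clusters.
import numpy as np
from scipy import ndimage
from scipy.stats import norm

rng = np.random.default_rng(1)
noise = ndimage.gaussian_filter(rng.standard_normal((200, 200)), sigma=3)
noise /= noise.std()  # roughly unit-variance smooth random field

for p in (0.01, 0.001):
    z_cut = norm.isf(p)                        # one-sided z threshold
    mask = noise > z_cut
    labels, n = ndimage.label(mask)            # contiguous clusters
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    biggest = int(sizes.max()) if n else 0
    print(f"p < {p}: {n} clusters, largest = {biggest} voxels")
```

There is no signal in this data at all, yet at p < .01 the noise forms sizeable contiguous clusters... so a cluster-extent cutoff calibrated for a stricter threshold will happily pass them as "activations"...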

I've taken all those consciousness/event-related fMRI studies that measure blood flow, oxygenation, etc. with a pinch of salt for about three years now... ever since it became clear that what researchers were inferring from these measurements wasn't very solid at all... without knowing what the *actual* processes going on in the brain really are, all many of them were really getting was pretty coloured pictures... that might, or might not, mean something.
 
If only humans could understand that we still have so much to learn. I recently read an article claiming that 90 percent of studies are false, fudged, or flawed.
 
But 100% of the evidence for flawed practices in science comes from scientific studies - is that then also 90% false?
 