Can fMRI be trusted?


The use of brain images is often criticized. A recent article by Martha Farah looks at the ‘kernels of truth’ behind the critiques and at how far we can safely trust the images (citation below). She is concerned that legitimate worries about imaging are being confused with false ones.

The first criticism she addresses is that the BOLD signal in fMRI comes from oxygenated blood, not brain activity. True, but she notes that scientific measurements are very often indirect. What matters is the nature of the link involved in the measurement. In this case, even though the exact nature of the link between brain activity and blood flow is not known, it has been established that they are causally related. One point she does not make is that there is not necessarily a time lag in the blood flow. The flow is controlled by astrocytes, and these glia appear (at least in the case of attention) to anticipate the need for increased blood flow. “In many ‘cognitive’ paradigms, blood flow modulation occurs in anticipation of or independent of the receipt of sensory input” - Moore & Cao (citation below).

There are complaints that the presentation of images involves fabrications of scale and colour: the colours are misleading, the differences they represent can be tiny, and the scales can be arbitrary. Farah points out that this is true across science. Graphs and illustrations are routinely stylized and exaggerated so that readers can see and understand them more easily, and fMRI images are not exceptional in this respect.

A number of criticisms have been made about the emphasis that imaging puts on localization and modular thinking. Again, this is somewhat true. But only early imaging did localization for localization’s sake, looking for activity in locations already shown to be involved in a particular process in order to prove the validity of the method. Today’s imaging has gone past that. A related gripe is that no psychological hypothesis can be decisively tested by imaging. Her answer is that this is true of all psychological methods; none are decisive. Nevertheless, imaging has helped to resolve issues. There are also complaints that imaging favours the production of modular hypotheses, biasing research. But the questions that science, in general, asks are those it has the tools to answer. This is not new, is not true only of imaging, and is not an unreasonable way to proceed.

Farah does agree with criticism of ‘wanton reverse inference’, but only when it is wanton. Although you can infer that a particular thought process is associated with a particular brain activity, you cannot turn that around: a particular brain activity does not imply a particular thought process, because an area of the brain may do more than one thing. An example I still notice is the idea that amygdala activity has to do with fear, when fear is only one of the things the amygdala processes. Farah says ‘wanton’ because the criticism should not be applied to MVPA (multivoxel pattern analysis), which is a valid special case of reverse inference.
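The trouble with turning the inference around can be made concrete with a toy Bayes calculation. This is only a sketch: every probability below is made up for illustration, not taken from any study.

```python
# Toy Bayes calculation illustrating why reverse inference can mislead.
# All numbers are hypothetical, chosen only for illustration.

p_active_given_fear = 0.9   # P(amygdala active | fear): strong forward inference
p_fear = 0.1                # assumed base rate of fear states in the experiment
p_active_given_other = 0.3  # the amygdala also activates for other processes

# Total probability of seeing amygdala activation
p_active = p_active_given_fear * p_fear + p_active_given_other * (1 - p_fear)

# Reverse inference: P(fear | amygdala active) via Bayes' rule
p_fear_given_active = p_active_given_fear * p_fear / p_active

print(f"P(active | fear) = {p_active_given_fear:.2f}")
print(f"P(fear | active) = {p_fear_given_active:.2f}")  # prints 0.25
```

Even with these generous made-up numbers, activation implies fear only a quarter of the time; the forward and reverse conditional probabilities are very different things.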

The statistics of imaging are another area where suspicion is raised. Some are simply concerned that statistics act as a filter and are not ‘the reality’. But the use of statistics in science is widespread; it is a very useful tool; statistics do not mask reality but approximate it better than raw data does. There are, however, two types of statistical analysis that Farah does feel are wrong. They are often referred to as dead salmon activity (multiple comparisons) and voodoo correlations (circularity). These two faulty statistical methods can also be found in large complex data sets in other sciences: psychometrics, epidemiology, genetics, and finance.

“When significance testing is carried out with brain imaging data, the following problem arises: if we test all 50,000 voxels separately, then by chance alone, 2,500 would be expected to cross the threshold of significance at the p<0.05 level, and even if we were to use the more conservative p<0.001 level, we would expect 50 to cross the threshold by chance alone. This is known as the problem of multiple comparisons, and there is no simple solution to it…Statisticians have developed solutions to the problem of multiple comparisons. These include limiting the so-called family-wise error rate and false discovery rate.”
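The arithmetic in the passage above is easy to reproduce. The following sketch (assuming NumPy is available; the voxel count and thresholds come from the quote, everything else is a simulation of a purely null experiment) draws 50,000 null p-values, counts how many cross each threshold by chance, and then applies a Bonferroni family-wise correction:

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels = 50_000
# Under the null hypothesis, every voxel's p-value is uniform on [0, 1]
p_values = rng.uniform(0.0, 1.0, size=n_voxels)

for alpha in (0.05, 0.001):
    expected = n_voxels * alpha
    observed = int((p_values < alpha).sum())
    print(f"alpha={alpha}: expect ~{expected:.0f} false positives, got {observed}")

# Bonferroni (family-wise error) correction: test each voxel at alpha/n,
# so under the null almost nothing survives
bonferroni = p_values < 0.05 / n_voxels
print("survive Bonferroni:", int(bonferroni.sum()))
```

Running this gives roughly 2,500 and 50 chance “hits” at the two uncorrected thresholds, matching the quote, and essentially none after correction; that is the whole dead-salmon point in a dozen lines.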

“Some researchers first identified the voxels most activated by their experimental task and then—with the same data set—carried out analyses only on those voxels to estimate the strength of the effect. Just as differences due to chance alone inflate the uncorrected significance levels in the dead fish experiment, differences due to chance alone contribute to the choice of voxels selected for the second analysis step. The result is that the second round of analyses is performed on data that have been “enriched” by the addition of chance effects that are consistent with the hypothesis being tested. In their survey of the social neuroscience literature, Vul and colleagues found many articles reporting significant and sizeable correlations with proper analyses, but they also found a large number of articles with circular methods that inflated the correlation values and accompanying significance levels.”
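The circularity problem can also be demonstrated with pure noise. In this sketch (NumPy assumed; all the data are random, so any correlation is chance), selecting the voxels most correlated with a behavioural score and then estimating the effect on the same data produces a large “correlation”, while the same voxels in an independent data set show almost none:

```python
import numpy as np

rng = np.random.default_rng(1)

n_subjects, n_voxels = 20, 5_000
behavior = rng.standard_normal(n_subjects)           # behavioural scores: pure noise
brain = rng.standard_normal((n_subjects, n_voxels))  # voxel data: pure noise

def voxel_correlations(data, scores):
    """Pearson correlation of every voxel column with the score vector."""
    zd = (data - data.mean(axis=0)) / data.std(axis=0)
    zs = (scores - scores.mean()) / scores.std()
    return zd.T @ zs / len(scores)

r = voxel_correlations(brain, behavior)

# Circular step: pick the 10 voxels most correlated with behaviour,
# then report their mean correlation from the SAME data
top = np.argsort(np.abs(r))[-10:]
print("circular estimate:", np.abs(r[top]).mean())   # large, despite pure noise

# Honest check: the same voxels in an independent replication
brain2 = rng.standard_normal((n_subjects, n_voxels))
behavior2 = rng.standard_normal(n_subjects)
r2 = voxel_correlations(brain2, behavior2)
print("independent estimate:", np.abs(r2[top]).mean())  # close to zero
```

Selecting by a noisy statistic and then re-measuring the same statistic on the same data is exactly the “enrichment” the quote describes; the fix is to select and estimate on independent data.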

Finally she tackles the question of influence. The complaint is that images are too convincing, especially to the general public. This may be true in some cases, but attempted replications of many of the undue-influence studies have not shown the effect. It may be the notion of science, rather than imaging in particular, that is convincing. Or it may be that people have become used to the images and the coloured blobs no longer have undue impact. There is also the question of resources. Some feel that imaging studies get the money, acceptance in important journals, interest from the media and so on. There seems to be little actual evidence for this, and it may often be sour grapes.

Should we trust fMRI? Yes, within reason. No single paper, with images or without, can be taken as True with a capital T, but provided the statistics and inferences are sound, images are as trustworthy as other methods.

Farah, M. J. (2014). Brain images, babies, and bathwater: critiquing critiques of functional neuroimaging. Hastings Center Report, Spec No. PMID: 24634081

Moore, C., & Cao, R. (2008). The Hemo-Neural Hypothesis: On the Role of Blood Flow in Information Processing. Journal of Neurophysiology, 99(5), 2035-2047. DOI: 10.1152/jn.01366.2006


This post was chosen as an Editor's Selection for ResearchBlogging.org
