*Bad reasoning as well as good reasoning is possible; and this fact is the foundation of the practical side of logic.*

~Charles Sanders Peirce

Why is it so hard to prove that matter makes mind? Neuroscientists are looking for proof with methods such as fMRI, which flags differences in metabolic activity with differences in color. With fMRI we can see which areas of the brain are working hard when a person is engaged in a particular activity. We already know what the person is doing (e.g., looking at a face) and we see that one particular area (say the fusiform gyrus) is metabolizing a lot of glucose. (Over-)stated in syllogistic form, the idea is “If the person looks at a face, the fusiform gyrus is active.” One can dig deeper, e.g., by asking whether other areas are also active, or whether the fusiform gyrus is also active when the person is doing other things.

Syllogisms belong to the world of logic. In the messy world of behavioral science and physiology, *modus ponens* is demoted to probability form. We can say that there is a probability that the fusiform gyrus is active if the person is looking at a face. For generality’s sake, let’s say that M denotes the mental activity of looking and A the activity in the brain Region Of Interest (ROI). We are interested in the conditional probability of A given M. To the extent that p(A|M) is high, the *forward inference* from M to A is easy. Knowing M we can infer A. But this inference is still incomplete. As noted above, we also need to know the probability of A under other conditions (p(A|~M), e.g., when the person is not looking at a face). Now we see that what is informative is the ratio of p(A|M) over p(A|~M). Indeed, this ratio, which is also known as the Bayes factor, is one way of expressing a correlation between A and M. If the ratio is > 1, the correlation coefficient phi is > 0.
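As a sketch, the link between this ratio and phi can be checked in a few lines of Python; the counts below are hypothetical:

```python
import math

def forward_ratio_and_phi(n_MA, n_MnA, n_nMA, n_nMnA):
    """From 2x2 counts (M&A, M&~A, ~M&A, ~M&~A), compute the ratio
    p(A|M)/p(A|~M) and the phi coefficient of the table."""
    p_A_given_M = n_MA / (n_MA + n_MnA)
    p_A_given_notM = n_nMA / (n_nMA + n_nMnA)
    ratio = p_A_given_M / p_A_given_notM
    phi = (n_MA * n_nMnA - n_MnA * n_nMA) / math.sqrt(
        (n_MA + n_MnA) * (n_nMA + n_nMnA) * (n_MA + n_nMA) * (n_MnA + n_nMnA))
    return ratio, phi

# Hypothetical counts where A is more likely under M than under ~M:
ratio, phi = forward_ratio_and_phi(40, 20, 15, 30)   # ratio = 2.0, phi ~ 0.33
# When M and A are independent, the ratio is 1 and phi is 0:
ratio0, phi0 = forward_ratio_and_phi(40, 20, 30, 15)  # ratio = 1.0, phi = 0.0
```

A ratio above 1 always goes with a positive phi, and a ratio of exactly 1 with phi = 0.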

Mental work is statistically associated with physiological activity. Neuroscientists agree that fMRI and other imaging techniques are fundamentally correlational. They cannot speak to the question of what type of activity causes what type of mentation. They only show what type of mental event predicts (is associated with) what type of brain activity. Descartes being dead and all (though Baptists and New-Agers abound), few scientists would claim that *this* type of mental event causes *that* type of physiological activity. That would be heresy and a desecration of materialist commitments. I myself would consider it irrational. This leaves us with an asymmetry (which is always thought-provoking): Whereas it is nutty to claim that M causes A, it is unnutty to claim that A causes M. So why not try to extract a causal path leading from A to M from the data?

Finding that path is what many neuroscientists are eager to do, but it gives them a headache. Let’s go back to the syllogism. We stated that if M, then A. The only valid inference (*modus tollens*) we can draw involves denying A: if not A (~A), then not M (~M). But that leaves us unfulfilled. We want to conclude that if A, then M. If *this* brain area is active, then *that* mental event is going on. This sounds like a causal claim. A causes M. But to infer M from A is a *reverse inference*, which commits the logical fallacy of *affirming the consequent*.

If we are dealing with probabilities, and not with logical implications, this may not be a problem. If the inference from M to A is not certain and the inference from A to M is not certain either, both could still have some probabilistic value. Perhaps, but an asymmetry remains, and that is because the typical procedure manipulates M and measures A. So p(A|M) is directly provided by the method, whereas p(M|A) is estimated. It would be a mistake to assume that p(M|A) = p(A|M). This equality occurs only if the base rates of A and M are the same. Otherwise, the two conditional probabilities are different, and the differences can be huge.
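A toy calculation shows how far apart the two conditional probabilities can drift when the base rates differ; all numbers here are hypothetical:

```python
# Hypothetical counts for 1000 trials: M is rare, A is common,
# and A almost always accompanies M.
n_M, n_A, n_MA = 100, 500, 90
p_A_given_M = n_MA / n_M   # 0.9  -- the forward inference looks strong
p_M_given_A = n_MA / n_A   # 0.18 -- the reverse inference is weak
```

The forward inference is nearly certain, yet the reverse inference fails four times out of five, simply because A occurs five times as often as M.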

Thoughtful neuroscientists are aware of the problem of reverse inference, although they seem to forget that thoughtful statisticians and clinical psychologists have written about this issue since the 1950s (e.g., Dawes, 1988; Meehl & Rosen, 1955). In a widely cited paper, Poldrack (2006) characterizes the problem in its appropriate statistical framework, while also telling us not to worry too much. He writes that “Cognitive neuroscience is generally interested in a mechanistic understanding of the neural processes that support cognition rather than the formulation of deductive laws. To this end, reverse inference might be useful in the discovery of interesting new facts about the underlying mechanisms” (p. 60). I counsel caution. It is true that no one has asked neuroscientists to formulate deductive laws, but they should respect those that we already have.

When dismissing logic, something else must be put in its place. Poldrack suggests pragmatism. “Philosophers have argued that this kind of reasoning (termed ‘abducive inference’ by Pierce [*sic*]) is an essential tool of scientific discovery” (p. 60). Consider a numerical example. Suppose we have a total of 105 participants. Sixty get a task to work on that we know requires mental process M, so that p(M) = .57. With fMRI we observe that 70 have activation in the ROI so that p(A) = .67. Further suppose that there is no correlation between M and A. The four joint frequencies (probabilities) are 40 (.38) for M and A, 20 (.19) for M and ~A, 30 (.29) for ~M and A, and 15 (.14) for ~M and ~A. The conditional probability of A given M is p(A|M) = .67. Forward inference is pointless because p(A|M) = p(A). The base rate says it all.
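These figures are easy to verify; a sketch reusing the counts from the example:

```python
# The worked example: 105 participants, with M and A independent.
N = 105
n_MA, n_MnA, n_nMA, n_nMnA = 40, 20, 30, 15   # M&A, M&~A, ~M&A, ~M&~A
p_M = (n_MA + n_MnA) / N               # 60/105 ~ 0.57
p_A = (n_MA + n_nMA) / N               # 70/105 ~ 0.67
p_A_given_M = n_MA / (n_MA + n_MnA)    # 40/60  ~ 0.67
# Forward inference is pointless here: p(A|M) equals the base rate p(A).
```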

What is desired for the reverse inference is the probability of M given A, which is p(M|A) = .57. This probability can be computed by dividing the number of people who have both M and A (40) by the total number of people who have A (70). It is also possible to use Bayes’s Theorem, which is what Poldrack suggests. Namely, p(M|A) = p(M) x p(A|M)/p(A). We notice that p(M|A) goes up if p(M) goes up or if p(A) goes down. Poldrack notes that the latter is difficult (I agree) and therefore turns to the former “to improve confidence in reverse inferences . . . increase the prior probability of the cognitive processes in question” (p. 62). This sounds like an attractive strategy because “the prior is to some degree under the control of the experimenter, as he/she can often choose experimental tasks that maximize the prior probability of a particular process being engaged” (pp. 62-63).
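Both routes to p(M|A) can be checked against the example’s counts:

```python
# Reverse inference two ways: direct count vs. Bayes's Theorem.
p_M, p_A = 60/105, 70/105
p_A_given_M = 40/60
p_M_given_A_direct = 40/70                    # ~0.57, counted directly
p_M_given_A_bayes = p_M * p_A_given_M / p_A   # the same value via Bayes
```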

But does it work? Starting with the numbers of our example, we increase the base rate of M from .57 to .8, while maintaining the independence of M and A. The conditional probability p(A|M) remains at .67, whereas its inverse, p(M|A), increases from .57 to .8, thus tracking p(M). No correlation, no inference.
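A sketch with counts consistent with these probabilities, again assuming N = 105:

```python
# Raise p(M) to .8 while keeping M and A independent (hypothetical counts).
n_MA, n_MnA, n_nMA, n_nMnA = 56, 28, 14, 7   # M&A, M&~A, ~M&A, ~M&~A
p_M = (n_MA + n_MnA) / 105            # 0.8
p_A_given_M = n_MA / (n_MA + n_MnA)   # still ~0.67
p_M_given_A = n_MA / (n_MA + n_nMA)   # 0.8 -- it merely tracks p(M)
```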

A possible objection to this demonstration is that it may only be valid if there is no correlation between M and A. If there is a correlation, then perhaps an increase in the base rate of M will strengthen the reverse inference from A to M.

To check this out, I drew up the following frequencies:

|       | A  | ~A | Total |
|-------|----|----|-------|
| M     | 40 | 20 | 60    |
| ~M    | 15 | 30 | 45    |
| Total | 55 | 50 | 105   |

We find that p(M) = .57, p(A) = .52, and p(A|M) = .67. A reverse inference is possible because p(M|A)/p(M|~A) = .73/.4 = 1.82. We also note that phi = .33.
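These values can be recomputed from counts consistent with the stated probabilities (assuming N = 105):

```python
import math

# Correlated case: M&A = 40, M&~A = 20, ~M&A = 15, ~M&~A = 30.
p_M_given_A = 40 / 55        # ~0.73
p_M_given_notA = 20 / 50     # 0.4
ratio = p_M_given_A / p_M_given_notA                 # ~1.82
phi = (40*30 - 20*15) / math.sqrt(60 * 45 * 55 * 50)  # ~0.33
```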

Next we increase the base rate of M to .8, using these frequencies.

|       | A  | ~A | Total |
|-------|----|----|-------|
| M     | 56 | 28 | 84    |
| ~M    | 7  | 14 | 21    |
| Total | 63 | 42 | 105   |

Holding p(A|M) and p(~A|~M) constant at .67 requires an increase of the base rate of A from .52 to .6. What is the result? The reverse inference that we had with the lower base rate of M has become **weaker**. Now, p(M|A)/p(M|~A) = .89/.67 = 1.33 (with a corresponding drop of phi from .33 to .27). This result shows that the strategy of raising the base rate of M is counterproductive.
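Again a quick check, using counts consistent with the stated probabilities (assuming N = 105):

```python
import math

# After raising p(M) to .8, with p(A|M) and p(~A|~M) held at .67:
# M&A = 56, M&~A = 28, ~M&A = 7, ~M&~A = 14.
ratio = (56/63) / (28/42)                            # ~1.33, down from 1.82
phi = (56*14 - 28*7) / math.sqrt(84 * 21 * 63 * 42)  # ~0.27, down from 0.33
```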

Poldrack knows that it would be better to increase the *specificity* of A. If ROIs could be found that are selective in their activity such that p(~A|~M) is high, the correlation between M and A, and thus reverse inference, would be strengthened. Still, the old correlation-does-not-entail-causation riddle remains. Solving the reverse inference problem is necessary for progress on the matter-causes-mind hypothesis, but it is not sufficient.
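A sketch of the specificity point, with hypothetical counts in which only the ~M row changes:

```python
import math

def phi(a, b, c, d):
    # a = M&A, b = M&~A, c = ~M&A, d = ~M&~A
    return (a*d - b*c) / math.sqrt((a+b) * (c+d) * (a+c) * (b+d))

# Same p(A|M) = .67 in both cases, but a more selective ROI in the second:
low_specificity = phi(40, 20, 15, 30)   # p(~A|~M) = .67 -> phi ~ 0.33
high_specificity = phi(40, 20, 5, 40)   # p(~A|~M) = .89 -> phi ~ 0.56
```

Raising p(~A|~M) while leaving the M row untouched strengthens the correlation, and with it the reverse inference.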

**An academic problem?**

Should you care when academics argue over how to draw inferences? I think sometimes you should. During the run-up to the 2008 presidential election, Iacoboni and colleagues (2007) published an article in the NY Times with the evocative title “This Is Your Brain on Politics.”

They scanned the brains of 20 undecided voters while showing them images of candidates Clinton (H.), Obama, Romney, and McCain, among others. Activity in several ROIs was recorded, yielding a host of reverse inferences. The scans suggested mixed emotions of positive interest, anxiety, disgust, and empathy. Notably, the authors did not make predictions about how their research participants would vote.

Only 3 days later, 17 watchful and responsible neuroscientists -- including Poldrack -- replied with a letter to the editor. They objected to the oversimplified story presented to the reading public. To quote the group: "As cognitive neuroscientists who use the same brain imaging technology, we know that it is not possible to definitively determine whether a person is anxious or feeling connected simply by looking at activity in a particular brain region. This is so because brain regions are typically engaged by many mental states, and thus a one-to-one mapping between a brain region and a mental state is not possible."

Dawes, R. M. (1988). *Rational choice in an uncertain world*. San Diego, CA: Harcourt Brace Jovanovich.

Meehl, P. E., & Rosen, A. (1955). Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores. *Psychological Bulletin, 52*, 194-216.

Poldrack, R. A. (2006). Can cognitive processes be inferred from neuroimaging data? *Trends in Cognitive Sciences, 10*, 59-63.