Delusions as explanations

21 September 2016 | Matthew Parrott (King's College London)

One idea that has been extremely influential within cognitive neuropsychology and neuropsychiatry is that delusions arise as intelligible responses to highly irregular experiences.

For example, we might think that the reason a subject adopts the belief that a house has inserted a thought into her head is that she has in fact had an extremely bizarre experience representing a house pushing a thought into her head (the case comes from Saks 2007; see Sollberger 2014 for an account of thought insertion along these lines).

If this were to happen, then delusions would arise for reasons that are familiar from cases of ordinary belief. A delusional subject would simply be endorsing or taking on board the content of her experience. However, the notion that a delusion is an understandable response to an irregular experience need not be construed along the lines of a subject accepting the content of her experience.

Over a number of years, Brendan Maher advocated an influential alternative proposal, according to which an individual adopts a delusional belief because it serves as an explanation of her ‘strange’ or ‘significant’ experience (see Maher 1974, 1988, 1999). 

Crucially, for Maher, the content of the subject’s experience is not identical to the content of her delusional belief. Rather, the latter is determined in part by contextual factors, such as cultural background or what Maher calls ‘general explanatory systems’ (cf. 1974). Maher’s approach is often referred to as the ‘explanationist’ approach to understanding delusions (Bayne and Pacherie 2004).

Explanationist accounts have been especially popular with respect to the Capgras delusion that one’s friend or relative is really an imposter (e.g., Stone and Young 1997) and delusions of alien control (e.g., Blakemore et al. 2002). Yet, despite its prevalence, the explanationist approach has been called into question by a number of philosophers on the grounds that delusions are quite obviously very bad explanations.

For instance, Davies and colleagues argue:

‘The suggestion that delusions arise from the normal construction and adoption of an explanation for unusual features of experience faces the problem that delusional patients construct explanations that are not plausible and adopt them even when better explanations are available. This is a striking departure from the more normal thinking of non-delusional subjects who have similar experiences.’ (Davies et al. 2001, p. 147; but see also Bayne and Pacherie 2004, Campbell 2001, Pacherie et al. 2006)

Indeed, since delusions strike most of us as highly implausible, it is hard to see how they could explain any experience, no matter how unusual. So if we want to understand delusional cognition along Maher’s lines, we will need to clarify the cognitive transition from anomalous experience to delusional belief in a way that illuminates how it could be a genuinely explanatory transition.

In what follows, I would like to distinguish three ways in which a delusional belief might be thought to be explanatorily inadequate, each of which I think poses a distinct challenge for the explanationist approach.

The first concerns the phenomenal character of a delusional subject’s anomalous experience. Maher claims that the strange experiences we find in cases of delusion ‘demand’ explanations. 

But why is that? If the experiences that give rise to delusions do not themselves represent highly unusual states of affairs (as Maher seems to think), what is it about them that calls for or ‘demands’ an explanation? And does the particular phenomenal character of a ‘strange’ experience ‘demand’ a specific form of explanation, or are all ‘strange’ experiences relatively equal when it comes to their demands? 

The challenge for the explanationist is to clarify the phenomenal character of a delusional subject’s anomalous experience in a way that makes clear how it could be the explanandum of a delusion. Let’s call this the Phenomenal Challenge.

I actually think some very influential neuropsychological accounts have difficulty with the Phenomenal Challenge. To briefly take one example, Ellis and Young (1990) proposed that the Capgras delusion arises because of a lack of responsiveness to familiar faces in the autonomic nervous system. 

In non-delusional subjects, an experience of a familiar face is associated with an affective response in the autonomic nervous system, but Capgras subjects fail to have this response. Ellis and Young’s theory predicted that there would be no significant difference in the skin conductance responses of Capgras subjects when they are shown familiar versus unfamiliar faces, and this has subsequently been confirmed by a number of studies. Thus it seems there is good evidence that a typical Capgras subject’s autonomic nervous system is not sensitive to familiar faces.

This seems promising, but I don’t think it answers the Phenomenal Challenge because it doesn’t tell us anything about what a Capgras subject’s experience of a face is like. As John Campbell notes, ‘the mere lack of affect does not itself constitute the perception’s having a particular content.’ (2001, p. 96) Moreover, individuals are not normally conscious of their autonomic nervous system (see Coltheart 2005).

So it isn’t clear how diminished sensitivity within that system constitutes an experience that ‘demands’ an explanation involving imposters. To really understand why an anomalous experience of a familiar face calls for a delusional explanation, we need to get a better sense of what that experience is like.

A second worry raised in the passage quoted above is that delusional subjects adopt delusional explanations ‘even when better explanations are available’. Why does this happen? Why does a delusional subject select an inferior hypothesis from the set of those available to her? Let’s call this the Abductive Challenge.

To illustrate, let’s stick with Capgras. The explanationist proposal is that a subject adopts the belief that her friend has been replaced by an imposter in order to explain some odd experience. But even if we suppose the imposter hypothesis is empirically adequate, it is highly unlikely to be the best explanation available. 

As Davies and Egan remark, ‘one might ask whether there is an alternative to the imposter hypothesis that provides a better explanation of the patient’s anomalous experience. There is, of course, an obvious candidate for such a proposition.’ (2013, p. 719) In fact, there seem to be a number of better hypotheses available; for example, that one has suffered a brain injury, or any hypothesis appealing to more familiar changes affecting facial appearance, such as a change of hairstyle or illness.

Put simply, the Abductive Challenge is that even if we assume the cognitive transition from unusual experience to delusion involves something like abductive reasoning or inference to the best explanation, delusional subjects select poor explanations instead of better available alternatives. The explanationist needs to tell us why this happens (for some attempts see Coltheart et al. 2010, Davies and Egan 2013, McKay 2012, Parrott and Koralus 2015).

The final challenge for explanationism is, in my view, the most problematic. In the above passage, Davies and colleagues remark that delusions are extremely implausible. Along these lines, we might naturally wonder why a subject would even consider one to be a candidate explanation of her unusual experience. 

Why would she not instead immediately rule out a delusional hypothesis on the grounds that it is far too implausible to be given serious consideration? This concern is echoed by Fine and colleagues:

‘They explain the anomalous thought in a way that is so far-fetched as to strain the notion of explanation. The explanations produced by patients with delusions to account for their anomalous thoughts are not just incorrect; they are nonstarters. Appealing to the notion of explanation, therefore, does not clarify how the delusional belief comes about in the first place because the explanations of the delusional patients are nothing like explanations as we understand them.’ (Fine et al. 2005, p. 160)

The task of explaining some target phenomenon demands cognitive resources, and the idea that delusions are explanatory ‘nonstarters’ implies that they would normally be rejected immediately. We know that a person engaged in an explanatory task considers only a restricted set of hypotheses, and it seems quite natural to exclude those that are inconsistent with one’s background knowledge.

Delusions seem to conflict with our background knowledge, and this is perhaps why we find it difficult to understand how someone could think a delusion is even potentially explanatory (for further discussion, see Parrott 2016).

So why do subjects consider delusional explanations as candidate hypotheses? This is the final challenge for the explanationist. Let’s call it the Implausibility Challenge. Notice that whereas the Abductive Challenge asks why a subject eventually adopts one hypothesis instead of another from among a fixed set of available alternatives, the Implausibility Challenge is more general. It asks where these hypotheses, the ones subject to further investigation, come from in the first place.

Can these three challenges be overcome? I am optimistic and have tried to address them for the case of thought insertion (see Parrott forthcoming). However, I also think much more work needs to be done.

First, as I mentioned above, it is not clear that we have a good understanding of what it is like for an individual to have the sorts of experiences we think are implicated in many cases of delusion. Without such understanding, I think it is hard to see why some experiences make demands on a person’s cognitive explanatory resources. 

I also suspect that understanding what various anomalous experiences are like might shed more light on why delusional individuals tend to adopt very similar explanations.

Second, I think that addressing the Implausibility Challenge requires us to obtain a far better understanding of how hypotheses are generated than we currently have. In both delusional and non-delusional cognition, an explanatory task presents a computational problem. Which candidate hypotheses should be selected for further empirical testing? 

Although I have suggested that epistemically impossible hypotheses are normally ruled out, that doesn’t tell us how candidates are ruled in. Plausibly, there is some selection function, or set of functions, that chooses candidate explanations of a target phenomenon, but, as Thomas and colleagues note, we have very little sense of how this might work:

‘Hypothesis generation is a fundamental component of human judgment. However, despite hypothesis generation’s importance in understanding judgment, little empirical and even less theoretical work has been devoted to understanding the processes underlying hypothesis generation.’ (Thomas et al. 2008, p. 174)

The Implausibility Challenge strikes me as especially puzzling because I think we can easily see that certain strategies for hypothesis generation would be bad. For instance, it wouldn’t generally be good to consider hypotheses only if they have a prior probability above a certain threshold, because a hypothesis with a low prior probability might best explain a new piece of evidence.
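To make this concrete, here is a minimal worked example, with purely illustrative numbers of my own rather than figures from any of the work cited above. Suppose hypothesis H1 has a high prior probability and H2 a very low one, but a new piece of evidence E is far more likely if H2 is true:

\[
P(H_1) = 0.9,\quad P(H_2) = 0.01,\quad P(E \mid H_1) = 0.001,\quad P(E \mid H_2) = 0.5
\]

\[
\frac{P(H_2 \mid E)}{P(H_1 \mid E)} \;=\; \frac{P(E \mid H_2)\,P(H_2)}{P(E \mid H_1)\,P(H_1)} \;=\; \frac{0.5 \times 0.01}{0.001 \times 0.9} \;\approx\; 5.6
\]

In light of E, the low-prior hypothesis H2 comes out roughly five to six times more probable than H1, so a generation strategy that screened out every hypothesis with a prior below, say, 0.05 would have excluded the best available explanation before inquiry even began.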

Delusional cognition raises quite a few deep and interesting questions, many of which bear on how we think about belief formation and reasoning. And I have only scratched the surface when it comes to the kinds of puzzles that arise when we start thinking about the origins of delusion. 

But I hope that distinguishing these explanatory challenges will help us think about the questions that need to be pursued if we are to assess the plausibility of the explanationist strategy.

References

Bayne, T. and E. Pacherie. 2004. “Bottom-up or Top-down?: Campbell’s Rationalist Account of Monothematic Delusions.” Philosophy, Psychiatry, and Psychology, 11: 1–11.

Blakemore, S., D. Wolpert, and C. Frith. 2002. “Abnormalities in the Awareness of Action.” Trends in Cognitive Sciences, 6: 237–242.

Campbell, J. 2001. “Rationality, Meaning and the Analysis of Delusion.” Philosophy, Psychiatry, and Psychology, 8: 89–100.

Coltheart, M., P. Menzies, and J. Sutton. 2010. “Abductive Inference and Delusional Belief.” Cognitive Neuropsychiatry, 15: 261–87.

Coltheart, M. 2005. “Conscious Experience and Delusional Belief.” Philosophy, Psychiatry, and Psychology, 12: 153–57.

Davies, M., M. Coltheart, R. Langdon, and N. Breen. 2001. “Monothematic Delusions: Towards a Two-Factor Account.” Philosophy, Psychiatry, and Psychology, 8: 133–158.

Davies, M. and Egan, A. 2013. “Delusion: Cognitive Approaches, Bayesian Inference and Compartmentalization.” In K.W.M. Fulford, M. Davies, R.G.T. Gipps, G. Graham, J. Sadler, G. Stanghellini and T. Thornton (eds.), The Oxford Handbook of Philosophy of Psychiatry. Oxford: Oxford University Press.

Ellis, H. and A. Young. 1990. “Accounting for Delusional Misidentifications.” British Journal of Psychiatry, 157: 239–48.

Fine, C., J. Craigie, and I. Gold. 2005. “The Explanation Approach to Delusion.” Philosophy, Psychiatry, and Psychology, 12 (2): 159–163.

Maher, B. 1974. “Delusional Thinking and Perceptual Disorder.” Journal of Individual Psychology, 30: 98–113.

Maher, B. 1988. “Anomalous Experience and Delusional Thinking: The Logic of Explanations.” In T. Oltmanns and B. Maher (eds.), Delusional Beliefs. Chichester: John Wiley and Sons, pp. 15–33.

Maher, B. 1999. “Anomalous Experience in Everyday Life: Its Significance for Psychopathology.” The Monist, 82: 547–570.

McKay, R. 2012. “Delusional Inference.” Mind and Language, 27: 330–355.

Pacherie, E., M. Green, and T. Bayne. 2006. “Phenomenology and Delusions: Who Put the ‘Alien’ in Alien Control?” Consciousness and Cognition, 15: 566–577.

Parrott, M. 2016. “Bayesian Models, Delusional Beliefs, and Epistemic Possibilities.” The British Journal for the Philosophy of Science, 67: 271–296.

Parrott, M. forthcoming. “Subjective Misidentification and Thought Insertion.” Mind and Language.

Parrott, M. and P. Koralus. 2015. “The Erotetic Theory of Delusional Thinking.” Cognitive Neuropsychiatry, 20 (5): 398–415.

Saks, E. 2007. The Center Cannot Hold. New York: Hyperion.

Sollberger, M. 2014. “Making Sense of an Endorsement Model of Thought Insertion.” Mind and Language, 29: 590–612.

Stone, T. and A. Young. 1997. “Delusions and Brain Injury: The Philosophy and Psychology of Belief.” Mind and Language, 12: 327–364.

Thomas, R., M. Dougherty, A. Sprenger, and J. Harbison. 2008. “Diagnostic Hypothesis Generation and Human Judgment.” Psychological Review, 115(1): 155–185.