
Verified by Psychology Today

Philip R. Corlett Ph.D.

Why Are Delusions About Other People?

Evaluating theories of odd belief.

There has been a pronounced turn toward the social of late. In our everyday lives, confinement due to the pandemic has made our social appetites all the more acute. We have turned increasingly to social media to share our thoughts and consume information, almost unfiltered, from many thousands of potential interlocutors. In science, analyses of the social, even in basic preclinical animal models, have received increasing focus.

No doubt, we are an exquisitely social species. Furthermore, the explosion in popularity of paranoid conspiracy theories about the pandemic during confinement is testimony to how susceptible we are to believing things about others' intentions, particularly the intentions of people we don't know very well.

Given the richness and complexity of our social interactions, institutions, and technologies, it is tempting to conclude that social cognition is the apotheosis of our abilities and that all other functions ought to be conceived of as scaffolding social functioning.

To that end, psychologists and neuroscientists are leveraging methodological advances and gathering larger, more interactive datasets, moving beyond a focus on the individual towards an n>1 neuroscience and psychology.

This is important, not least because such advances allow scientists to prosecute their work through the lens of the social milieu and its impact on our cognition and comportment: from racial discrimination, through trauma and neglect, all known risk factors for mental illness that have hitherto not been considered enough in cognitive neuroscience and neuropsychiatry.

However, as areas of study grow and expand, they sometimes do so with changes in the implicit and explicit theoretical commitments that researchers endorse; commitments that often reach far beyond the scope and evidentiary basis of the research itself.

For example, by focusing on the social, we may stray into positing specifically social processes or modules, whose functioning is uniquely impaired in people with mental illness.

We may theorize that beliefs and delusions have a specifically social function.

Such developments can be immensely stimulating, generating new hypotheses: for example, that both delusions proper and, say, paranoid conspiracy theories serve a coalitional function, making one feel like part of a group and benefit from that association, while differing in the degree to which they are sensitive to feedback from the people within that group, with whom we interact.

I would like to caution against some of these ideas and commitments.

I’d like to suggest that when considering an explanation for a phenomenon like belief or delusion, we should not ask ourselves, “can I believe it?” (is there anything that recommends this particular idea?) but rather “should I believe it?” (weighing the data, pro, con, and as yet uncollected, is this a reasonable explanation, without challenging counter-evidence?).

This can be hard.

Ironically, this exercise might be the way to inoculate ourselves more broadly against misinformation, confirmation bias, and conspiracy.

Another important tool for evaluating beliefs comes from a 13th-century Franciscan friar, William of Ockham: Occam's Razor, which posits that the simplest explanation for a phenomenon is probably the correct one.

This tool ought to be particularly useful when dispelling conspiracy theories, which are often extremely elaborate, tenuous, and demanding of extreme coordination and secrecy.

Leveled at scientific theories, William of Ockham would challenge us to posit the simplest mechanisms before turning to more elaborate processes.

Evolution is also relevant here. Are there simpler mechanisms from which more complex features evolved?

Evolution by natural selection is a tinkerer, co-opting solutions to other problems.

What might the roots of a phenomenon like belief, delusion, or social cognition be?

We should focus on those as pathophysiological mechanisms, before considering more specific and bespoke processes.

Finally, it is important to consider the level of an explanation, as defined by the computational neuroscientist David Marr, a point made with regard to social cognition very recently:

Is the explanation cast in relatively abstract terms, concerned with why an organism or system might be doing what it does? What is the goal?

Is the explanation addressing what questions? What are the computations? What information is being processed and manipulated (or in our case, with regards to delusions and conspiracies, mishandled)?

Or is the explanation couched in terms of how questions? How are those computations implemented in the brain?

Theories that seem to be mutually exclusive might just be addressing different levels of explanation.

Theories that span all three levels should perhaps be considered more complete and favorable than those that don't (that is, explanations at one level ought to constrain the explanations entertained at the other levels; algorithms that a brain can't implement cannot explain what a brain is doing and why).

By thinking about phenomena in this way, we may gain insights into how to tease apart different theories, ask better questions of our data, and rule particular explanations in and out.

So, let’s turn to social theories of belief and delusion.

Such theories are thought to be necessary because delusions are usually about social things (though beliefs more generally needn't be about social things, they may serve to signal affiliation with a particular group).

People with delusions are concerned that others are talking about them, or controlling them, or planning to persecute them. Those proffering social theories of delusions claim that other theories can’t explain this, and so a social theory is necessary.

Can non-social theories explain the contents of delusions?

I think that they can.

For example, paranoia — the belief that others intend to harm us — flourishes in the context of unexpected uncertainty. During times of great historical upheaval (political, economic, and medical), people have sought a social explanation, usually a mysterious or poorly understood out-group that need not even be connected to the crisis. Having an enemy can actually be reassuring. Perceiving that enemy as a source of misfortune increases the sense that the world is predictable and controllable, that risks are not randomly distributed. In settings where a sense of control is reduced, people will compensate by attributing exaggerated influence to an enemy, even when the enemy's influence is not obviously linked to those hazards.

There are supportive empirical studies here, including some of my own. Paranoia — perhaps the sine qua non of social delusions — is related to increased unexpected uncertainty (about non-social things) and poor non-social belief updating, an effect we replicated in rodents administered methamphetamine (an elicitor of paranoia in humans). This is key: rats clearly evince social behaviors, but compared to humans they are a relatively asocial species.

Social theories claim to differentiate delusions from odd beliefs on the grounds that delusions are more inflexible, more immune to updating in light of contradiction by one’s peers.

There are relevant empirical data here too. In a groundbreaking series of studies, Emmanuelle Peters and Philippa Garety queried a host of beliefs (formes frustes of the major delusional themes), cast in relative terms: "Does it ever feel as if the government isn't telling us the truth about UFOs?" If a belief is endorsed, a series of three follow-up questions is triggered: "How convinced are you that it's true?" "How preoccupied are you by it?" and "How distressed are you by that belief?"

Peters and her colleagues were able to find groups of people — for example, followers of New Religious Movements — who were not clinically unwell but who were as convinced of and preoccupied by their delusion-like beliefs as people with schizophrenia were by their delusions; the people with schizophrenia, however, were significantly more distressed.

These data trouble the social theorists' appeal to fixity as distinguishing delusion from odd belief.

If adherents of the social theory can readily reject irrationality as a criterion to discern delusions from other odd beliefs, on the grounds that non-clinical beliefs are often irrational, then they should similarly reject fixity as a cornerstone of delusion-hood.

We need only look at the hyper-polarized political climate for ample examples of the insensitivity of many non-delusional beliefs to updating.

Furthermore, Peters’ data demonstrate the importance of distress as a means of distinguishing delusions from more innocuous but exotic beliefs.

Under domain-general theories, distress magnifies, and is magnified, by uncertainty. Distress then becomes a means of drawing a mechanistic distinction between delusions and non-clinical delusion-like beliefs. Some of my own data suggest that non-social prediction errors, engaged in response to unsurprising events — which would engender uncertainty — correlate with the distress with which delusion-like beliefs are held in non-clinical volunteers.

What about the important risk factors that social theories can encompass? How do domain-general theories accommodate them? Risk factors with an inherently social focus, like trauma, neglect, and abuse, can be brought under the domain-general explanatory umbrella too — any factor that violates expectations is consistent with domain-general as well as domain-specific theories. We might reasonably expect, based on innate priors and early learning, that caregivers should protect us. Traumatizing experiences violate those expectations and yield uncertainty and distress, leading to delusion-like explanatory beliefs.

What about Occam’s razor and Marr’s levels of explanation?

Intriguingly, social learning and inference may have evolved from more basic non-social learning mechanisms — such as the reinforcement learning and Bayesian belief updating algorithms that appear to underwrite perception, action selection, and prosecution, as well as non-social decision-making. Central to both is prediction error, the mismatch between expectation and experience, that works as a teaching signal — to update both social and non-social beliefs.
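The prediction-error mechanism described here can be sketched minimally as a delta-rule update (in the spirit of Rescorla-Wagner-style reinforcement learning). The function name, learning rate, and outcome values below are illustrative assumptions, not a model from the article:

```python
def update_belief(belief, outcome, learning_rate=0.1):
    """Delta-rule update: shift a belief toward the observed outcome
    in proportion to the prediction error (expectation vs. experience)."""
    prediction_error = outcome - belief  # the mismatch acts as a teaching signal
    return belief + learning_rate * prediction_error

# Example: a belief (here, the expected probability of a harmful encounter)
# revised across a run of mostly benign experiences.
belief = 0.5
for outcome in [0, 0, 1, 0, 0]:
    belief = update_belief(belief, outcome)
```

The same update applies whether the outcome is social (another person's behavior) or non-social (a sensory event), which is why domain-general accounts treat a single mechanism as serving both kinds of belief.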

How then might we distinguish a specifically social account from a domain-general belief updating account? I would suggest focusing on the algorithmic (what) and implementational (how) levels of explanation, and their overlap. If there are brain regions that handle specifically social prediction errors (and there may be), then paranoia should — according to specifically social theories — involve problems in those regions in particular, and not deficits in circuits that handle prediction errors more generally (like eulaminate limbic cortices).

Finally, much of the empirical work to which social theories of delusion refer involves non-clinical data. It will be enlightening to see how people with clinical delusions behave under conditions of coalitional threat. Without those data, it is hard to evaluate how these theories fare, particularly in distinguishing clinical delusions from non-clinical delusion-like beliefs.

Clearly, we can believe social theories of delusions.

But I am not sure we should, just yet.