
Hacked Off

Do you think your brain hasn't been hacked? Think again.

A recent documentary recommended to me by self-styled “bad boy of Brexit” Arron Banks (1) has proved fascinating and divisive in equal measure. Called The Great Hack, it is an account of the influence of the firm Cambridge Analytica on a number of recent political events. The US election, the UK referendum on Brexit, and the recent elections in Trinidad and Tobago all got attention from the now-defunct Cambridge Analytica. Were they some shadowy group of nefarious geniuses breaking into our private lives? Yes and no. (2)

In the past, market researchers had to rely on costly (and very limited) surveys, which were only filled out by the sort of people who fill out surveys. Later, what is technically called “market segmentation” got a lot more sophisticated, and you could buy datasets of people who might want your products, built from clusters of information cobbled together from their buying habits (such as your loyalty card). So, you might target your Jaguar adverts at High Achievers or your nappy adverts at Young Couples. Those famous Guinness ads (the ones with dolphins, fronted by the recently deceased Rutger Hauer) were specifically targeted at people who thought that they couldn’t be fitted into marketing categories. Irony is no friend to human free will.

However, market researchers had a problem. Lots of people didn’t have loyalty cards, and lots of people didn’t do surveys. How to access their preferences and quirks? You already know the answer to this. Along came social media and, like a bunch of sheep lining up to be shorn, we all willingly (“willingly”—you could hardly stop us) shared some of our most intimate data—data which would have cost a fortune to buy—with total strangers. Strangers like Cambridge Analytica. And they, of course, analyzed it, just as market researchers have always done. So far so ordinary.

But then it occurred to some smart people that these clusters of data could be used to predict some very interesting things. Who you were likely to vote for. What sorts of issues got your emotions running hot. And, as someone in The Great Hack states plainly, angry or frightened people make mistakes. With many votes balanced on a knife-edge (the 2016 US election was decided by some 70,000 voters in a few swing states), the ability to nudge a tiny percentage of the population one way or the other suddenly takes on global significance.

A rather sniffy recent review in The Economist argues that The Great Hack is misnamed, because no computer systems were hacked into in the course of targeting voters with (mis)information designed to make them angry or frightened, and so sway their voting habits. The Economist’s review misses the point big time (3). It wasn’t computers that were hacked; it was us.

Wait. Are you saying my brain is kompromat?

All nervous systems have to make judgments under uncertainty, because they have limited time. So they have evolved tricks and shortcuts that deliver good-enough predictions. The result is a number of well-documented cognitive and emotional glitches. Don’t think of these as nature’s mistakes; they are clues to how the underlying brain systems work and how they evolved. It does mean, however, that we have a bunch of emotional and cognitive weaknesses that can be exploited. In short: we can believe things for bad reasons and fail to believe things for good ones.

[Image: Glitches cheat sheet. Source: Thinkingishard]

We have good reason to believe that these biases in human belief are systematic. If the cost of falsely believing that something is true differs from the cost of falsely believing that something is false, then our systems bias us towards the less costly error. This is called error management theory, and it’s a large part of what we teach in cognitive science. Over the long run of evolution, brains have been tuned to make whichever error costs the least.

“Least costly” here means in terms of reproduction, not in terms of believing stupid things. For example, if I (falsely, alas) believe that I am god’s gift to women and therefore persist in showering gifts and attention on someone who isn’t reciprocating, I am less likely to wise up than my amused friends might assume from other features of my brain and education.
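To make that logic concrete, here is a minimal sketch (in Python, with invented costs purely for illustration) of how unequal error costs shift the rational threshold for acting on a possible signal:

```python
# Error management in miniature: when the two possible mistakes carry
# unequal costs, the rational threshold for acting shifts.
# All cost numbers are invented, purely for illustration.

def act_threshold(cost_false_alarm: float, cost_miss: float) -> float:
    """Probability that 'the signal is real' above which acting beats waiting.

    Acting on a false signal costs `cost_false_alarm`; failing to act on a
    real one costs `cost_miss`. The expected costs balance when
    (1 - p) * cost_false_alarm == p * cost_miss, i.e. at
    p = cost_false_alarm / (cost_false_alarm + cost_miss).
    """
    return cost_false_alarm / (cost_false_alarm + cost_miss)

# Symmetric costs: act only when you are more than 50% sure.
print(act_threshold(cost_false_alarm=10, cost_miss=10))  # 0.5

# Asymmetric costs (a miss is nine times worse than a false alarm):
# acting becomes the better bet at just 10% certainty.
print(act_threshold(cost_false_alarm=10, cost_miss=90))  # 0.1
```

Lower the relative cost of a false alarm and acting on flimsy evidence becomes, in the evolutionary bookkeeping, the sensible bet. That asymmetry looks, from the outside, exactly like a bias.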

The classic example is that women want hard-to-fake signs of commitment, while men over-perceive signs of sexual interest. I’ve discussed this in detail here. But I have drifted somewhat from the starting point: The Great Hack of our politics.

You might be thinking that “this couldn’t happen to me.” So, let me hack into your brain, just a little. Nothing up my sleeves... you will notice that at no time do my hands leave the ends of my arms...

Risky?

Let’s say you are in charge of public health, and an outbreak of measles is predicted to kill 600 people. You have two alternatives:

1. Program A: 200 people will be saved.

2. Program B: there is a 1/3 probability that all 600 will be saved, and a 2/3 probability that no one will be saved.

Which one do you pick? A or B?

Hold that thought.

Now, imagine the same situation with the measles, only this time the programs are as follows:

1. Program C: 400 people will die.

2. Program D: there is a 1/3 probability that no one will die, and a 2/3 probability that all 600 will die.

Now—which one do you pick? C or D?

In the classic study by Kahneman and Tversky, 72 percent preferred A to B in the first case, while 78 percent preferred D to C in the second.

You have probably already guessed that A and C (and likewise B and D) are equivalent. Here is one worrying aspect: even when this inconsistency is pointed out, many people stand by their decisions (4).
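If the equivalence isn’t obvious, here is the arithmetic made explicit (a minimal Python sketch, using the scenario’s own numbers):

```python
# The four measles programs, scored as expected survivors out of 600.
# A/C and B/D are the same gambles; only the framing (saved vs. die) differs.

TOTAL = 600

program_a = 200                              # 200 saved for certain
program_b = (1/3) * 600 + (2/3) * 0          # the gamble, framed as lives saved
program_c = TOTAL - 400                      # 400 die for certain, so 200 survive
program_d = (1/3) * (TOTAL - 0) + (2/3) * (TOTAL - 600)  # the gamble, framed as deaths

print(program_a, program_b, program_c, program_d)  # 200 200.0 200 200.0
```

Same expected outcome in every case; all that changes is whether the options are framed as gains in the first pair or losses in the second.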

This isn’t an intelligence issue: you can generate the same effect in doctors by framing treatment options in terms of mortality rates rather than survival rates. The highly robust glitch in human reasoning is that, especially under informational overload, we are risk-averse when choices are framed as gains and risk-seeking when they are framed as losses.

[Image: Feeling "certain"? Source: Philip Toscano/PA]

And what this means is that if you make people feel overwhelmed, even threatened (perhaps with bogus pictures of non-existent invasions?), they will tend to revert to what seems to them to be the least risky option when given particular choices. Which you then offer to them. Maybe you can write this offer on the side of a bus, if the fancy takes you. And you’ve hacked them.

References

1) Arron Banks recommended that it be taken off the air, and has hired a bunch of lawyers to try to make this happen. This (of course) immediately made me want to watch it. Either Arron Banks isn’t as smart as he thinks he is, or he is an amazing philanthropist playing an elaborate game of double bluff. Which is it, I wonder?

2) https://www.netflix.com/ie/title/80117542

3) https://www.economist.com/prospero/2019/07/24/the-great-hack-is-a-misin…

4) The classic study is Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. See also Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
For the follow-up study on people doubling down, see Dawes, R. M., & Kagan, J. (1988). Rational choice in an uncertain world (pp. 196–210). New York: Harcourt Brace Jovanovich.
