
Deepfakes Can Be Used to Hack the Human Mind

Digital copies of people are easy to manipulate and can readily influence viewers.

Key points

  • A Deepfake is a hyper-realistic digital copy of a person that can be manipulated into doing or saying anything.
  • Deepfakes produce automatic and self-reported attitudes that are just as strong as those established by genuine online content.
  • Deepfakes can impact viewers even when viewers are aware that Deepfaking is possible and detect that they are being exposed to it.
  • Deepfakes may be used to hack the human mind for maladaptive ends.
Deepfakes of President Obama have been created. Source: Janeb13/Pixabay

by Sean Hughes

"Seeing is believing"—or so it used to seem. Over the past several years, a branch of artificial intelligence (AI) known as "deep learning" has emerged, and with it, a new technology that may undermine how much we can trust our own eyes and ears. Deep learning allows for a person’s face, voice, or writing style to be fed to a computer algorithm and for a "Deepfake" to be created. A Deepfake is a hyper-realistic digital copy of a person that can be manipulated into doing or saying anything (click here to see a Deepfake of President Obama).

Although this new technology has many beneficial uses, it’s also ripe for abuse. Deepfakes are increasingly being used to harass and intimidate political activists and to harm people in the business, entertainment, and political sectors. Female celebrities are being Deepfaked into highly realistic pornographic scenes, while worry grows that politicians could be made to "confess" to bribery or sexual assault. Such disinformation can distort democratic discourse and election outcomes.

Although laws and detection technologies are undoubtedly necessary for regulating Deepfakes, they cannot guarantee that we will be completely protected from exposure to malicious online content. What is needed, then, alongside laws and technology, is a greater understanding of how Deepfakes influence our conscious and unconscious minds.

Can Deepfakes Manipulate Our Automatic First Impressions?

We recently carried out a set of experiments with thousands of participants to start answering this question. We wanted to know if a single brief exposure to a Deepfake would be enough to bias our first impressions of other people.

Participants in our studies were asked to navigate to YouTube and watch a short video of "Chris," a person they were encountering for the first time. We manipulated the content of the video so that they would form either a positive or a negative impression of Chris. We also manipulated the type of video they encountered, so that some watched a genuine video of Chris while others watched a Deepfake of him.

In our initial studies, Deepfakes were created using a "cut-and-paste" method: we extracted ("cut") Chris’ words and actions from context A and inserted ("pasted") them into an entirely different video of him in context B. This allowed us to put words into his mouth and have him "confess" to either virtuous or malicious actions he had never previously committed.
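
To make the logic of this "cut-and-paste" method concrete, here is a minimal sketch in Python using the open-source moviepy library. It does nothing more than lift the audio out of one recording and lay it over a different video of the same person; the file names and timestamps are hypothetical, and the actual study stimuli were produced with far more sophisticated talking-head editing tools that also resynthesize the speaker's lip movements (Fried et al., 2019).

```python
from moviepy.editor import VideoFileClip  # moviepy 1.x API

# Hypothetical files and timestamps, purely for illustration.
# "Cut": lift a spoken sentence out of a recording made in context A.
clip_a = VideoFileClip("chris_context_a.mp4")
confession_audio = clip_a.subclip(12.0, 18.0).audio

# "Paste": lay those words over an unrelated video of the same person
# filmed in context B, so he appears to say them there.
clip_b = VideoFileClip("chris_context_b.mp4").subclip(0.0, 6.0)
composite = clip_b.set_audio(confession_audio)
composite.write_videofile("cut_and_paste_demo.mp4")
```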

We found that, by selectively exposing people to positive or negative Deepfakes of Chris, we could control how he was publicly perceived—liked by some and despised by others. Deepfakes also installed automatic and self-reported attitudes that were just as strong as those established by genuine online content. In another set of studies, we simulated an even more realistic scenario and fabricated the fake content entirely from scratch (i.e., had an AI algorithm make Chris say things he had never previously said). This second form of Deepfaking also strongly biased people’s first impressions.

These findings show that Deepfaked videos can influence our perceptions of others. We also examined another question: would exposure to only a Deepfake of a person’s voice be enough to influence the listener’s thoughts and feelings? We fed Chris’ voice to a neural network to teach it how he speaks and then had it create a Deepfaked voice: a synthetic replica that sounded similar to Chris and could be manipulated into saying anything. By cloning Chris’ voice and manipulating what he "said," we once again took control of how he was perceived.
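
For readers curious what voice cloning looks like in practice, the sketch below uses Coqui TTS, one openly available voice-cloning toolkit. The model, reference recording, and sentence are illustrative assumptions, not the system used in these studies: a few seconds of reference audio are enough for such tools to mimic a voice and make it "say" something the speaker never uttered.

```python
from TTS.api import TTS

# Load an open voice-cloning model (Coqui XTTS v2, chosen here only as an
# illustrative example; it is not the system used in the studies above).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the target voice from a short reference recording and synthesize a
# sentence the speaker never said. File names and text are hypothetical.
tts.tts_to_file(
    text="I kept the wallet I found instead of returning it.",
    speaker_wav="chris_reference.wav",
    language="en",
    file_path="chris_cloned_voice.wav",
)
```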

In our final study, we set out to answer one last set of questions: Are people aware that Deepfaking is possible, and can they detect when they are being exposed to such content? Unfortunately, many people are unaware that Deepfaking is even possible, and many find it difficult to detect when they are being exposed to it. Worse, neither awareness nor detection protected them from its influence: even those who were aware of Deepfaking and who detected that they were exposed to one had their attitudes and intentions biased.

Taken together, our work shows that Deepfakes can quickly and powerfully impact viewers, even when viewers are aware that Deepfaking is possible and detect that they are being exposed to it. Different types of Deepfaked content (video and audio) and different Deepfake creation methods ("cut and paste" vs. "fabricate from scratch") are all capable of influencing how we perceive others. Although politicians, journalists, academics, and think tanks have all warned of the dangers that this new technology poses, our paper is one of the first to offer systematic empirical support for those claims. Our findings highlight the need to study the psychology of Deepfakes, and in particular, how this new technology may exploit our cognitive biases, vulnerabilities, and limitations for maladaptive ends.

This blog post is based on the following paper: Hughes, S., Fried, O., Ferguson, M., Hughes, C., Hughes, R., Yao, X., & Hussey, I. (2021). Deepfaked online content is highly effective in manipulating people’s attitudes and intentions.

References

Kietzmann, J., Lee, L., McCarthy, I., & Kietzmann, T. (2020). Deepfakes: Trick or treat? Business Horizons, 63, 135-146.

Satter, R. (2020). Deepfake used to attack activist couple shows new disinformation frontier. Reuters. https://www.reuters.com/article/us-cyber-deepfake-activist-idUSKCN24G15E

Ajder, H., Patrini, G., Cavalli, F., & Cullen, L. (2019). The state of Deepfakes 2019: Landscape, threats, and impact. Sensity. https://sensity.ai/reports/

Galston, W. (2020). Is seeing still believing? The Deepfake challenge to truth in politics. The Brookings Institution. https://www.brookings.edu/research/is-seeing-still-believing-the-deepfa…

European Commission. (2018). Communication from the Commission - Tackling online disinformation: A European Approach, COM/2018/236 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52018DC0236

Burt, T., & Horvitz, E. (2020, September 1). New steps to combat disinformation. Microsoft. https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-dee…

Fried, O., Tewari, A., Zollhöfer, M., Finkelstein, A., Shechtman, E., Goldman, D., Genova, K., Jin, Z., Theobalt, C., & Agrawala, M. (2019). Text-based editing of talking-head video. ACM Transactions on Graphics, 38, 1-14.

Yao, X., Fried, O., Fatahalian, K., & Agrawala, M. (2020). Iterative text-based editing of talking-heads using neural retargeting. https://arxiv.org/abs/2011.10688
