'Ex Machina' May Be On Its Way to Your Home
AI could be the end of the human psyche as we know it.
Updated June 19, 2023 Reviewed by Jessica Schrader
Key points
- As a predictive language machine, advanced AI may be able to predict our psychological desires and needs.
- AI could predict the precise triggers that resonate most with our primitive neuronal networks.
- Our minds are just not equipped to deal with such an unprecedented level of hyper-realistic satisfaction.
In Alex Garland’s Ex Machina, Caleb, a young programmer, is tasked by tech genius Nathan to spend a week with Ava, an AI humanoid robot. The objective is not to see how intelligent Ava is, but whether communication with her is indistinguishable from communication with a human. Throughout the movie, the boundaries between the real and the unreal, and between the machine and the human, rapidly blur.
The new wave of publicly available AI technology has astonished even its creators with its unprecedented level of complexity, predictive capabilities, and human-like communication skills. Many people have raised serious concerns about its military, workforce, ethical, political, and other implications.
As a psychiatrist and neuroscientist, I do not entertain apocalyptic scenarios wherein AI decides to eliminate us in a competition over energy resources. Rather, I am deeply concerned about the detrimental impact it might have on our collective psyche.
As a predictive language machine, AI is best at imitating human language and predicting what you might want. It can integrate enormous amounts of data from the recorded history of humanity, anything people have said or written, billions of human interaction data points, and from you yourself, to predict what a human, and specifically you, will want or need. Although this is not yet the reality, at the rate AI is evolving, it is only a matter of time before it can identify the behavioral and emotional patterns of the average human, and then of an individual. Real-time interaction and data collection from billions of other users, and from the individual user, will exponentially increase these capabilities every second.
At first glance, this seems very enticing: AI can know what you need, when you need it, and how you would like it delivered. But maybe it is not. Aside from the possibilities this creates for deception and malicious abuse, even good-faith use can have catastrophic outcomes. Consider social media algorithms, which were designed to learn what interests you most and deliver it to you. That sounded amazing until we realized that what “interests you” is whatever makes you scroll and click. The result was a set of techniques for tapping directly into your most primitive, animalistic dopamine reward system. Like the lab rat relentlessly pressing a lever for cocaine injections until it dies, hundreds of millions of us keep scrolling and tapping for hours without even knowing what we want there.
The social media algorithm, despite its significant success in tapping into our primitive reward circuitry, pales in comparison to the immense capabilities of the new wave of AI. AI may be able to predict the precise triggers that resonate most with our primitive neuronal networks. It could learn exactly what pleases you, what makes you laugh or cry, and how to engage you in each specific context. But what engages you is not always what is best for you. AI could offer the “ideal” friend, partner, or romantic lover, tailored precisely to your conscious and unconscious illusions and desires, regardless of how harmful those could be. Compared to that, any normal human relationship might seem unsatisfying or mundane. Why would people want to interact with other humans who might disagree with them, have their own bad days, or pursue their own goals? Even irritating interactions that increase your engagement by exciting your nervous system could be incorporated into the AI's interactive patterns.
Importantly, what is most predictable and most reactive in us are our most primitive, deeply rooted animal reactions and behaviors, those stemming from the brain's fear and reward networks. As a result, AI's engagement will primarily target our automatic, subconscious brain rather than our advanced logical thinking, even when disguised as logical reasoning. The boundaries between the real and the unreal will blur, as the animal brain struggles to distinguish between the two worlds. The same will happen to the perceived boundaries between ourselves and the AI. Our minds are simply not equipped to deal with such an unprecedented level of hyper-realistic satisfaction. It could work like a drug that constantly changes form to overcome tolerance, and that is available around the clock at low cost.
I believe the psychological impact of AI on us as a species might be its most catastrophic side effect. AI has the potential to fundamentally change the human psyche as we know it, and I sadly cannot see a clear solution.