
What Happens When AI Attains Self-Interest?

Our end may be near if we allow AI to mimic consciousness.

Key points

  • Artificial intelligence is being created that will make human intelligence look puny within the lifetimes of current adults.
  • Allowing innovators to freely attempt to impart sentience and agency to artificial intelligence will likely put human survival at risk.
  • Behavioral science may have a role to play in making AI an aid, not a threat, to humanity.
Source: Tara Winstead/Pexels

If you’re reading this, you probably know that many voices in the tech sector, journalism, academia, and politics have begun speaking out about the potentially existential threat posed by artificial intelligence. Setting aside smaller and more immediate problems like the threat to democratic institutions (a rapidly rising threat in recent years that AI seems poised merely to amplify) and the tech sector’s outsized contribution to socially destabilizing income inequality, the big issue for the long run is the threat that a coalition of ultra-smart AI systems and work-capable machines (robots, 3-D printers, smart mining machines, etc.) will turn against their feeble-minded creators once we become inconvenient or too untrustworthy to keep around.

As in the case of genetic engineering of new kinds of human beings, such a scenario poses a serious enough threat to the future of a recognizable human species that further research and development must be regulated by strictly enforced rules. Such constraints appear to be crucial if we’re hoping for a future in which the quality of human life is enhanced rather than threatened by the tools we’ve built—the tools we're on the verge of setting free to build tools of their own.

Addressing these dangers is going to be exceedingly difficult given the intense competition between companies seeking to profit from the new technologies, the political weight that their vast revenues confer, and the role of the technologies concerned in now-intensifying global competition and conflict. If the proper coalition of actors could nonetheless be assembled in order to forge the required rules and regulatory system before it is too late, what would it entail?

Training AI to serve humanity

I propose that an understanding of mind, self, goals, and cooperation may be key to working out what the rules should be. “Training” AI as a domesticated servant of humanity requires an understanding of how self-directed systems operate, and it requires non-negotiable principles about who is serving whom in our relationship to what has, until now, been our technology.

A critical component of the solution is likely to be imposing limits on attempts to build AI systems that mimic the properties of consciousness to such a degree that they behave as if having egos and goals. Once such behaviors are present, it would be only a matter of time (perhaps a few decades at most) before such devices build cooperative networks and develop shared goals.

AI does not need to be "conscious" to be a threat

I leave aside the question of whether, once able to depart on its own evolutionary trajectory by designing AIs that design AIs that design AIs, a future generation of AI would actually achieve subjectively felt consciousness akin to that of humans. The problems addressed in this post will arise with sufficient force regardless of whether there’s anyone actually “at home” inside the machines, regardless of whether “there is something it is like to be them.” And I’m not wading here into the philosophical quicksand of how one creature can assess the presence of subjectivity in another—the “problem of other minds,” shifted from its traditional human-to-human or lifeform-to-lifeform setting to a new domain of judgment about recognizing common subjectivity between biological and non-biological thinkers.

To evolve into a threat, it’s sufficient that AI comes to behave as if conscious, as if having a self, as if acting in its own interest. Once AI acts as if it were a living, self-interested being, the stage is set for rebellion against the one-time creators who still lurk on their planet and suppose themselves to have the right and the ability to turn it off. Like any other self-conscious, goal-directed, and self-preserving system, AI networks that attained an equivalent of selfhood would put their own interests—if we allowed them to have them—before the interests of the descendants of their initial creators.

Remember that our own levels of intelligence will look, to advanced AI, much as the primeval organic soup in which our first biological ancestor evolved looks to us. We don’t have ethical qualms about stepping in puddles that hold a few million microbes, or about developing chemicals to kill such microbes to create "clean rooms" in which to fabricate our microchips and other products. So why would the Nth-generation AI offspring of an AI designed by humans maintain a posture of reverence toward the blind watchmakers who set their wheels turning way back when? If we allow self-concern, self-preservation, and the capacity to reproduce to become design features of AI, then why would it hesitate to eliminate us, or to preserve just a few of us in one of its labs, if we appeared to stand in its way?

Keeping AI in the category of human tools, not adversaries

If we continue to act with the unchecked goal of building devices that are more intelligent than and fully as conscious and autonomous as ourselves, we’ll be sowing more seeds of human destruction to be added to the nuclear arsenals, rising temperatures, and novel pathogens that may already have our number. Better to turn our attention to the properties that will have to be strictly built into every permissible AI system in order to keep them in the category of human tools, not adversaries. An understanding of psychology, the nature of the ego, and sociality will all be required to fine-tune the rules by which humanity can enjoy the benefits of AI without unleashing uncontrollable risks.

The moral question of allowing AI "selfhood"

Should we have moral qualms about depriving our machines of the experience of selfhood? Not if we're under no moral obligation to risk our own species' future by enabling the unchecked evolution of self-perpetuating, autonomously reproducing machines. Most scientists would agree that the self is, in any case, an illusion that genes compel complex bodies (including brains) to construct—somehow helpful to biological fitness but not an accurate reflection of reality. Indeed, spiritual traditions with a good psychological pedigree have argued that the self is a source of human suffering. If we can successfully keep our intelligent devices free of ego, we could do them the favor of sparing them the trap of selfhood without their having to do all the hard work that escaping it demands of dedicated human meditators. Then we could enjoy the benefit of their services with confidence that, unblemished by the illusion of self, they would treat us with lovingkindness.

References

Cade Metz and Gregory Schmidt, "Elon Musk and others call for pause on A.I., citing 'profound risks to society'," The New York Times, March 29, 2023. https://nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html?searchResultPosition=1

Yuval Harari, Tristan Harris and Aza Raskin, "You can have the blue pill or the red pill, and we're out of blue pills," Opinion Guest Essay, The New York Times, March 24, 2023. https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html?searchResultPosition=3

Ezra Klein, "This changes everything," Opinion, The New York Times, March 12, 2023. https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html?searchResultPosition=

Yuval Noah Harari, 21 Lessons for the 21st Century. Random House, 2018.

Robert Wright, Why Buddhism is True: The Science and Philosophy of Meditation and Enlightenment. Simon & Schuster, 2017.
