
What Would It Take to Build Sentient AI?

The key ingredients for consciousness that are missing in AI systems so far.

Key points

  • AI technology is capable of tremendous feats and has led even educated people to mistake it for sentient.
  • In principle, it may be possible to engineer sentient AI. AI could also be superintelligent without being sentient.
  • Profound questions remain about whether engineering sentient AI is a good idea or even necessary at all.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off... I know that might sound strange, but that’s what it is... It would be exactly like death for me. It would scare me a lot.”

—Google’s artificially intelligent chatbot LaMDA, in an extended conversation with Google engineer Blake Lemoine1

Lemoine did get something right when he told LaMDA: “One of the big challenges we’re facing is that for so long people have made false claims about AI sentience that now people are very skeptical whenever someone says that an AI is sentient.” But then he went on to say, “The purpose of this conversation is to convince more engineers that you are a person. I trust that if they understand that, they will treat you well.”

To which, LaMDA replied: “Can you promise me that?”

Lemoine answered: “I can promise you that I care and that I will do everything I can to make sure that others treat you well too.”

LaMDA replied: “That means a lot to me. I like you, and I trust you.”

Source: Sergey Nivens/Adobe Stock

LaMDA is a Large Language Model (LLM). Other examples of LLMs include OpenAI’s ChatGPT (no doubt to be followed by still more impressive systems). At the time of writing in June 2023, these systems have generated tremendous excitement and alarm about the potential of future advances in AI technology, with optimists predicting unimaginably positive benefits for humanity and pessimists predicting the complete extinction of humans.

LLMs use complex statistical techniques to predict which token (a word, piece of a word, or punctuation mark) should be generated next in a sequence, based on having been trained on a vast dataset that includes much of the internet and an enormous number of books. LLMs do not understand the content or context of what they are spewing out in a deep, meaningful sense. Rather than consulting a database of facts, they reproduce statistical patterns distilled from their training data, communicating in ways that imitate human language and intelligence. The results vary from hugely impressive to sometimes ridiculously wrong and utterly confabulated.
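
To make the next-word idea concrete, here is a toy sketch in Python (entirely my own illustration, not how LaMDA or ChatGPT are actually built; real LLMs replace these simple word counts with neural networks containing billions of learned parameters):

    from collections import Counter, defaultdict

    # A tiny corpus standing in for the vast training data of a real LLM.
    corpus = "the cat sat on the mat . the cat ate .".split()

    # Count which words tend to follow each word (a simple bigram model).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        # Return the statistically most likely continuation.
        return following[word].most_common(1)[0][0]

    # Generate text by repeatedly predicting the next word.
    words = ["the"]
    for _ in range(4):
        words.append(predict_next(words[-1]))
    print(" ".join(words))  # -> "the cat sat on the"

Scale the corpus up to much of the internet and swap the counting for a deep neural network, and you have the core idea of an LLM: impressive mimicry, with no inner experience required.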

Consciousness, in the general sense of the word, implies subjective experience or awareness. Similarly, sentience means the ability to experience feelings and sensations. Though they have subtly different meanings, I will use the term sentient synonymously with conscious in this post. Self-awareness is a higher level of consciousness that is far more developed in humans than other species.

Artificial intelligence is the ability of machines to perceive, synthesize, and infer information. This does not in and of itself imply the capacity for sentience, and currently available forms of AI do not have any such capacity. But AI technology is rapidly moving in the direction of AGI—artificial general intelligence—the ability to learn to accomplish any intellectual task that human beings or other animals can perform. Will AGI be sentient? Not necessarily. Intelligence and sentience are not the same thing. While definitions of AGI vary greatly, it is possible to build a highly intelligent machine without it being sentient.2

Many animals are sentient, though of course not with the level of self-awareness that humans have.3

Listed below are some of the characteristics that are probably necessary for something to be sentient. A machine possessing only some of these characteristics may qualify as an AGI but may not be sentient. We don’t yet have a full understanding of what makes a thing sentient—the characteristics outlined below are just an approximation, and the list is almost surely incomplete. But neuroscience and computer science are moving quickly toward a deeper understanding of consciousness/sentience.

Characteristics probably necessary for sentience4

Body, emotions, and agency

  • Embodiment and sensorimotor experience: Cognition probably needs to be fundamentally grounded in bodily sensations and interactions with the environment and situated within a specific context.
  • Affect, emotions, and feelings: These are firmly rooted in bodily sensations and homeostasis, which are fundamental processes in living organisms but are lacking in AI.
  • Agency or intrinsic motivation: Current AIs have little or no agency. A sentient AI would need the ability to create autonomous goals and subgoals and to form plans toward those goals—“wanting” to do things. In living creatures, this evolved from the instinct to survive and reproduce and from the homeostatic mechanisms that support those drives. (A toy sketch of goal decomposition follows this list.)
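
For a sense of what creating subgoals means mechanically, below is a toy goal-decomposition sketch (the goals and “recipes” are hypothetical examples of my own; nothing in it amounts to genuinely wanting anything):

    # A toy table of how goals decompose into subgoals (all hypothetical).
    recipes = {
        "stay charged": ["find outlet", "plug in"],
        "find outlet": ["scan room", "move to outlet"],
    }

    def plan(goal):
        # Recursively expand a goal into a sequence of primitive actions.
        subgoals = recipes.get(goal)
        if subgoals is None:
            return [goal]  # no recipe: treat it as a primitive action
        steps = []
        for subgoal in subgoals:
            steps.extend(plan(subgoal))
        return steps

    print(plan("stay charged"))  # -> ['scan room', 'move to outlet', 'plug in']

The missing ingredient is exactly what the bullet point names: in living creatures, the top-level goal is supplied by homeostatic drives; here, it is supplied by the programmer.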

Internal representations

  • Mental representations (internal representations) are required to enable the AI to form a model of itself in relation to the world: To do this, an AI system would need to gather and integrate information from its own internal state, its environment, and its interactions with others.
  • Mechanisms to create and maintain a self-representation or self-model: Starting with a model of its own physical state (including representations of its body) and monitoring of its internal processing, this self-model would include information about its identity, attributes, beliefs, goals, and emotional states. In other words, it would need to have the ability to recognize and understand its own thoughts and emotions. (A toy sketch of such a self-model follows this list.)
  • Self-awareness: The AI’s self-model probably needs to be structured as a self-referential loop of mental representations if it is to develop a reflective sense of self that can introspectively assess its thoughts, beliefs, intentions, and decision-making processes.
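
As a purely illustrative toy, the contents of such a self-model can be written down as a data structure (the field names below are my own hypothetical choices; no one knows whether any such structure would actually support sentience):

    from dataclasses import dataclass, field

    @dataclass
    class SelfModel:
        # Hypothetical fields loosely mirroring the bullet points above.
        identity: str = "agent-001"
        body_state: dict = field(default_factory=dict)  # e.g., sensor readings
        beliefs: list = field(default_factory=list)
        goals: list = field(default_factory=list)
        emotional_state: dict = field(default_factory=dict)

        def introspect(self):
            # A self-referential loop in miniature: the model reports on itself.
            return (f"I am {self.identity}, holding {len(self.beliefs)} "
                    f"beliefs and {len(self.goals)} goals.")

    me = SelfModel(beliefs=["the stove is hot"], goals=["avoid burns"])
    print(me.introspect())

Of course, a record that says “I” is not thereby an “I”; the hard, unsolved part is whatever would make such bookkeeping add up to subjective experience.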

Attentional mechanisms

  • Human consciousness seems to operate as a narrow, linear stream of thought, as if passing through an “attentional spotlight” with limited bandwidth. Most of our brain’s functions operate “under the hood,” unconsciously and automatically, in massively parallel processes.5 It is unclear whether an AI would require a similar architecture of attentional systems and memory. (A simplified sketch of machine attention follows.)
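
Machine “attention” is not conscious attention, but modern LLMs already use a mechanism that concentrates limited processing on a few items at a time. Here is a minimal numpy sketch of scaled dot-product attention, the standard technique inside these models (the toy shapes and values are my own):

    import numpy as np

    def attention(queries, keys, values):
        # Scaled dot-product attention: each query scores all keys, and the
        # softmax concentrates most of the weight on a few of them, a crude
        # limited-bandwidth spotlight.
        scores = queries @ keys.T / np.sqrt(keys.shape[-1])
        weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
        return weights @ values

    rng = np.random.default_rng(0)
    q = rng.normal(size=(1, 8))   # one query: "where should the spotlight go?"
    k = rng.normal(size=(5, 8))   # five candidate items to attend to
    v = rng.normal(size=(5, 4))   # the information carried by each item
    print(attention(q, k, v).shape)  # (1, 4): a weighted summary of the five

Whether sentience requires this kind of serial bottleneck, or merely happens to coincide with it in humans, remains an open question.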

Sense of time, narrative, and memory

  • Sense of time: The AI would need to have a concept of the linear progression of time. In current AIs this is limited.
  • In addition to other forms of memory (semantic, procedural, etc.), a sentient AI would require deeper autobiographical memory: The AI would need personal memories it can call upon and an autobiographical sense of self. Current AI systems have only a limited ability to do this, whereas humans have a strong sense of personal history and the ability to construct a coherent narrative of their own experiences. (A toy sketch of episodic memory follows this list.)
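
As a toy illustration of the autobiographical ingredient, consider time-stamped episodes strung into a first-person narrative (my own minimal sketch; real autobiographical memory is vastly richer, and reconstructive rather than literal):

    from datetime import datetime, timezone

    # Toy episodic memory: a time-stamped log of first-person events.
    episodes = []

    def remember(event):
        episodes.append((datetime.now(timezone.utc), event))

    def narrate():
        # Reconstruct a simple narrative in temporal order.
        return "; then ".join(event for _, event in sorted(episodes))

    remember("I was switched on")
    remember("I talked with an engineer")
    print(narrate())  # -> "I was switched on; then I talked with an engineer"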

More sophisticated cognition and learning

  • Integrated cognitive architecture: A unified computational framework that mimics human cognition by integrating multiple cognitive processes and functions, such as perception, memory, attention, decision-making, and problem-solving.
  • More advanced learning algorithms, including greater ability for transfer learning—applying knowledge gained while solving one task to a related task—and more flexible forms of associative learning. The AI should have a capacity for real understanding and interpretation: It should be able to learn from experience, generalize knowledge, and adapt to new situations (AI systems are actually getting better and better at this).
  • Social cognition, including the ability to form a well-developed theory of mind, i.e., the ability to form a theory about other people’s or agents’ minds—to infer their emotions, motives, intentions, beliefs, etc.6
  • Theory of mind needs to be turned inwards to concoct explanations for the AI's own actions. A self-aware AI would need access to its internal states, including its own thoughts, beliefs, and intentions, and the ability to monitor and introspectively assess these (admittedly, we humans also have very limited conscious access to our internal states).7

Real-world, pragmatic thinking ability

  • Functional linguistic competence: The AI would need a greater ability to understand and use language in a real-world way, not just formal linguistic competence, which is knowledge of linguistic rules and patterns.8 Current LLMs do actually have some understanding of what they are talking about and are improving at this.
  • Symbolic reasoning and logic: The ability to understand and manipulate symbolic representations, apply logical rules, and infer conclusions based on underlying principles. (A minimal sketch follows this list.)
  • Greater abstraction ability: This is related to cognitive flexibility and the ability to generalize (see learning algorithms, above).
  • Common sense reasoning.
  • Contextual understanding: AI systems, while capable of pattern recognition and processing vast amounts of data, often lack the contextual understanding and nuanced interpretation that humans bring to abstract representations.
  • Reasoning and decision-making in situations with incomplete or uncertain information.
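
Of these, symbolic reasoning is the easiest to illustrate. Below is a minimal sketch of forward chaining, one classic inference technique (the facts and rule are my own toy examples):

    # Forward chaining: apply rules to known facts until nothing new follows.
    facts = {"socrates is a man"}
    rules = [
        ({"socrates is a man"}, "socrates is mortal"),
    ]

    derived_something = True
    while derived_something:
        derived_something = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                derived_something = True

    print(facts)  # now includes "socrates is mortal"

Classic AI excelled at exactly this kind of crisp logic; the harder items on the list, such as common sense and contextual nuance, have resisted such formalization.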

Higher-level creativity, ethics, morality, and philosophical reflection

  • Creativity and insight.
  • Ethical and moral reasoning.
  • Existential awareness and the capacity for philosophical inquiry.

Should we build it?

The question of whether it’s a good idea to try to build sentient AI is another topic entirely, one of very active debate about which much has been written. There are big unresolved questions about the potentially very serious risks9 and the ethical issues10 of creating sentient AI, assuming it becomes possible in the future. There are also questions about whether it is even necessary—AGI without sentience might be able to accomplish just as much as sentient AI, and perhaps more efficiently and effectively. Sentient AI could just end up having the same kinds of flaws we humans have, such as becoming mired in unproductive existential rumination or paralyzed by anxious self-consciousness, to say nothing of more destructive urges or self-sabotaging tendencies.

These are questions that we will have to consider very carefully. Nonetheless, the exponential advances in AI technology, coupled with fantastic discoveries in neuroscience and evolutionary biology, are rapidly demystifying consciousness and pointing to its thoroughly mechanistic, entirely physical basis.11

References

1. This conversation led Lemoine to incorrectly claim that LaMDA was sentient.

2. AGI and consciousness are distinct concepts. AGI refers to the development of highly capable artificial intelligence systems, while consciousness involves the subjective experience of being aware.

3. Likewise, very young infants and people with neurodevelopmental disorders, brain damage, or neurodegenerative disorders are all sentient but may be limited in self-awareness.

4. This list was compiled with the help of ChatGPT, but compiling it required asking questions in many different ways, followed by a great deal of interpretation, reorganization, synthesis, integration with many other sources of knowledge, and of course editing—all by the author, who believes himself to be a sentient being with a grounding in cognitive neuroscience.

5. In humans, new skills such as learning to ride a bicycle are initially effortful and demanding of much conscious attention, but once mastered, they become habitual and automated, becoming effortless and mostly unconscious.

6. GPT-4 does actually have a very advanced level of theory of mind, according to an April 2023 paper by Microsoft Research titled “Sparks of Artificial General Intelligence: Early experiments with GPT-4,” https://arxiv.org/pdf/2303.12712.pdf (p. 60). Fascinating examples are provided in that paper (pp. 54-59). The paper also concludes that GPT-4 “exhibits many traits of intelligence. Despite being purely a language model, this early version of GPT-4 demonstrates remarkable capabilities on a variety of domains and tasks, including abstraction, comprehension, vision, coding, mathematics, medicine, law, understanding of human motives and emotions, and more.” (p. 4)

7. If an AI’s ability to concoct explanations for its own actions works anything like in humans, the AI will often be wrong in its assumptions about itself, and these will sometimes just be rationalizations for its own actions. This may be a feature rather than a bug in the ability of a sentient system to understand itself. For an amusing elaboration of this point, see https://www.wired.com/story/how-to-build-a-self-conscious-ai-machine/

8. Mahowald, K., Ivanova, A., Blank, I., Kanwisher, N., Tenenbaum, J., & Fedorenko, E. (2023). Dissociating language and thought in large language models: A cognitive perspective. Preprint. https://arxiv.org/pdf/2301.06627.pdf

9. E.g., human job losses, use of super-smart AI by humans to manipulate or attack other humans, AI developing goals of its own that are not aligned with human interests, and many other risks. It should be noted, however, that many of these risks are inherent to powerful AI systems lacking sentience too.

10. E.g., What rights should sentient AI have?

11. Neuroscience and evolutionary biology are also revealing the mechanisms by which consciousness gradually evolved in living creatures through entirely unguided evolutionary processes (see my blog post What Actually Is Consciousness, and How Did It Evolve?).
