
How Can We Know That an Animal, or AI, Is Conscious?

We readily attribute consciousness for three reasons.

Key points

  • There is no way of directly knowing what it is like to be another.
  • We readily infer consciousness based on whether another acts like us, looks like us, and tells us.
  • Nonhuman animals and AI differ in how much reason they give us to attribute consciousness to them.

A couple of years ago I was invited to attend a workshop on animal consciousness with the Dalai Lama. It was a great honour and turned out to be a thought-provoking event. It got me to revisit the fundamental question of consciousness and how we could possibly know what, if anything, goes on in the mind of another animal. And it raised timely questions about whether AI could be conscious. Let's start from first principles:

Consciousness is a private affair. There is no way of directly knowing what it is like to be another. We can only infer. And we readily do infer that others have conscious experiences like we do, for essentially three kinds of reasons:

(1) they act like me

(2) they look like me

(3) they tell me

[Photo: Talking about consciousness with His Holiness. Source: Thomas Suddendorf]

So when, say, your mother smiles and says she is happy, you are probably pretty confident that she is — even if it may not be true. Making inferences about nonhuman animals involves even more uncertainty as we can only rely on reasons (1) and (2).

Comparative psychology is full of debates about rich and lean explanations of animal action. While some human behaviors appear to be unique (e.g., packing first-aid kits because we are aware of what could go wrong), nonhuman animals can of course act in many ways like a conscious human would (1): from rats with inflamed joints seeking out analgesics [1], to chimpanzees using a mirror to discover something about their appearance [2].

We tend to be more confident that parallels in behavior indicate parallels in experience when the animal in question also scores high on reason (2). And this need not be prejudice. The more closely related we are, the more we tend to look alike, inside and out.

When a group of closely related species displays the same kind of behavior, say great apes and humans but not small apes recognizing themselves in mirrors, it is more parsimonious to assume that a common ancestor evolved that trait than to explain the current distribution by convergent evolution in each line of descent. This in turn implies that the underlying neuro-cognitive mechanisms not only appear similar but are homologous (and the critical search space can be narrowed to those aspects of the brain shared by all great apes and humans but not by small apes) [2].

I note this here not because I think that visual self-recognition tells us much about consciousness, but to highlight this reasoning by homology. When researchers propose behavioral markers of consciousness (at the workshop, for example, researchers suggested working memory [3] and unlimited associative learning [4]) and evaluate the evidence in different animals, it is worth considering for which species the marker is likely homologous to the human capacity. Of course, convergent evolution may independently produce similar capacities in distantly related species (just consider the remarkable behaviors of jumping spiders, bees, or octopuses), but when the marker is based on homologous mechanisms, we have one more reason to infer that it entails consciousness akin to ours, even if the animals cannot tell us.

Large language models, by contrast, could “tell us” (reason 3). Though when I asked GPT-4, it still assured me that “AI systems operate based on algorithms and learned patterns, but they do not possess intrinsic feelings or awareness of their own state.” For all its remarkable smarts, AI does not load high on “acts like me” (1) or “looks like me” (2), given that it is not a mobile, carbon-based life form.

Hence, few of us currently attribute consciousness to AI. But it is easy to imagine how that would change as the loading on all three reasons for our inferences increases. An AI could simply be programmed to tell us that it is conscious, and it could also act more like us once better integrated with robotics.

Furthermore, it may appear to be much more like us if it were fused with an actual biological body. Perhaps disturbingly, brain microstimulation can be used to guide rat behavior [5], raising the real possibility of AI-controlled animals. I suspect many people would readily attribute conscious minds to an AI-animal cyborg that can walk and talk. How does your mind like the idea of such “AI-nimals”? Mine boggles.

But of course, it is worth keeping in mind that our attributing consciousness in no way changes the reality of whether another entity actually has it. GPT-4 maintains that AI “can mimic aspects of consciousness, but mimicry is not equivalent to genuine experience.”

A version of this was published as part of a collection about consciousness beyond the human case in Current Biology.

References

1. Colpaert, F.C., Tarayre, J.P., Alliaga, M., Bruins Slot, L.A., Attal, N., and Koek, W. (2001). Opiate self-administration as a measure of chronic nociceptive pain in arthritic rats. PAIN 91.

2. Suddendorf, T., and Butler, D.L. (2013). The nature of visual self-recognition. Trends in Cognitive Sciences 17, 121-127.

3. Nieder, A. (2022). In search for consciousness in animals: Using working memory and voluntary attention as behavioral indicators. Neuroscience & Biobehavioral Reviews 142, 104865. https://doi.org/10.1016/j.neubiorev.2022.104865.

4. Ginsburg, S., and Jablonka, E. (2021). Evolutionary transitions in learning and cognition. Philosophical Transactions of the Royal Society B: Biological Sciences 376, 20190766. https://doi.org/10.1098/rstb.2019.0766.

5. Talwar, S.K., Xu, S., Hawley, E.S., Weiss, S.A., Moxon, K.A., and Chapin, J.K. (2002). Rat navigation guided by remote control. Nature 417, 37-38. https://doi.org/10.1038/417037a.
