
What a Mysterious Chinese Room Can Tell Us About Consciousness

How a simple thought experiment changed our views on computer and AI sentience.

Key points

  • The Chinese room argument is a thought experiment by the American philosopher John Searle.
  • It has been used to argue against the possibility of sentience in computers and machines.
  • While objections have been raised, it remains an influential way to think about AI and cognition.
  • Consciousness is mysterious, but computers don’t need to be sentient to produce meaningful language outputs.

Imagine you are locked inside a room full of drawers stacked with papers containing strange and enigmatic symbols. In the centre of the room is a table with a massive instruction manual written in plain English that you can easily read.

Although the door is locked, there is a small slit with a brass letterbox flap on it. Through it, you receive messages with the same enigmatic symbols that are in the room. You can find the symbols for each message you receive in the enormous instruction manual, which then tells you exactly which paper to pick from the drawers and send out through the letterbox as a response.
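In computational terms, the person in the room is executing a lookup procedure. The short Python sketch below is purely illustrative (the tiny rule book and phrases are invented stand-ins for Searle's enormous manual): it maps incoming symbols to outgoing symbols without any representation of what they mean.

```python
# A minimal sketch of the Chinese room procedure (illustrative only;
# this tiny rule book stands in for Searle's enormous instruction manual).
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def person_in_room(message: str) -> str:
    """Follow the manual mechanically: match the incoming symbols and
    return the listed reply. No understanding is involved at any step."""
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

# Seen from outside the letterbox, this looks like a coherent conversation:
print(person_in_room("你好吗？"))  # prints 我很好，谢谢。
```

The point of the analogy is that correct symbol manipulation of this kind says nothing about whether the system attaches any meaning to the symbols.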

The Chinese Room is one of the most important philosophical thought experiments on consciousness and has influenced how AI and machine sentience are viewed.
Source: Leon Gao | Unsplash

Unbeknownst to the person trapped inside the room, the enigmatic symbols are actually Chinese characters. The person inside has unknowingly held a coherent conversation with people outside simply by following the instruction manual, without understanding anything or even being aware of anything beyond messages being passed in and out.

John Searle in 2015: one of the most influential contemporary philosophers of mind, though he has also become a controversial figure (see footnote 1).
Source: Franks Valli | Wikimedia Commons

This scenario was conceived by the American philosopher John Searle (see footnote 1) in 1980, and the paper presenting it has become one of the most influential and most cited in the cognitive sciences and the philosophy of mind, with huge implications for how we see computers, artificial intelligence (AI), and machine sentience (Cole, 2023).

Searle (1980) used this thought experiment to argue that computer programs, which likewise manipulate symbols according to set rules, do not truly understand language and do not require any form of consciousness, even when giving responses comparable to those of humans.

Is AI Sentient?

A Google engineer made headlines in 2022 by claiming that the AI program he was working on was sentient and alive (Tiku, 2022). The recent advance of language-based AI such as ChatGPT has led many people to interact with it just as they would with real people (see "Why Does ChatGPT Feel So Human?").

It is not surprising, then, that many users truly believe that AI has become sentient (Davalos & Lanxon, 2023). However, most experts don't think that AI is conscious (Davalos & Lanxon, 2023; Pang, 2023a), not least because of the influence of Searle's Chinese room argument.

Consciousness is a difficult concept that is hard to define and fully understand (see "What Is Consciousness?" and "The Many Dimensions of Consciousness"; Pang, 2023b; Pang, 2023c). AI programs like ChatGPT employ large language models (LLMs) that use statistical analyses of billions of sentences written by humans to create outputs based on predictive probabilities (Pang, 2023a). In this sense, they take a purely mathematical approach built on a huge amount of data.
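The idea of output driven by predictive probabilities can be illustrated with a deliberately tiny sketch. The snippet below is not how real LLMs work internally (they use neural networks trained on billions of sentences, not raw word counts), but it shows the same principle: the next word is chosen from probabilities derived purely from statistics over text.

```python
from collections import Counter, defaultdict

# A toy "language model": predict the next word from bigram counts.
# Real LLMs use neural networks over vastly more data; the principle of
# output driven by predictive probabilities is the same.
corpus = ("the room is locked . the room is quiet . "
          "the manual is large . the manual is heavy .").split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1  # count what follows what

def next_word_probabilities(word: str) -> dict:
    """Predictive probabilities for the next word, from counts alone."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'room': 0.5, 'manual': 0.5} -- pure statistics, no understanding
```

Nothing in these numbers refers to rooms or manuals in the world; the output is driven entirely by the statistics of the input text.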

While this is a tremendous achievement and a hugely complex task, in its essence, AI follows instructions to create an output based on an input, just like the person stuck in the Chinese room thought experiment. Sentience is not required to produce sophisticated outputs or even to pass the Turing test, in which a human evaluator cannot tell the difference between communicating with a machine and communicating with another human (Oppy & Dowe, 2021).

There is no evidence that AI is sentient. However, even if it were, it might not be able to communicate directly with us and might not understand its own language model.
Source: Joshua Woroniecki | Unsplash

But there is another, more troubling implication of Searle's thought experiment: There is a conscious human being inside the Chinese room who is completely unaware of the communication going on in Chinese. Although we have no evidence suggesting that AI is conscious, let's assume for a moment that it were: The conscious part would be unlikely to understand its own language model and, while sentient, might have no idea about the meaning of its own language-based output, just like the person inside the Chinese room.

If AI were conscious, it might be suffering from a kind of locked-in syndrome (see "The Mysteries of a Mind Locked Inside an Unresponsive Body"; Pang, 2023d). It is not clear whether this barrier could ever be overcome.

Another implication of the Chinese room argument is that language production need not be linked to consciousness. This is true not just for machines but also for humans: Not everything people say or do is done consciously.

Objections

Searle’s influential essay has not been without its critics. In fact, it had an extremely hostile reception after its initial publication, with 27 simultaneously published responses that ranged from antagonistic to rude (Searle, 2009). Everyone seemed to agree that the argument was wrong, but there was no clear consensus on why it was wrong (Searle, 2009).

While the initial responses may have been reactionary and emotional, new discussions have appeared steadily in the four decades since the paper's publication. The most cogent response is the so-called systems reply: while no individual component inside the room understands Chinese, the system as a whole does (Block, 1981; Cole, 2023). Searle responded that the person could theoretically memorize the instructions and thus embody the whole system while still not being able to understand Chinese (Cole, 2023). Another possible response is that understanding is fed into the system by the person (or entity) who wrote the instruction manual but is now detached from the system.

Another objection is that AI is no longer just following instructions but is self-learning (LeCun et al., 2015). Moreover, when AI is embodied as a robot, the system could ground bodily regulation, emotion, and feelings just as humans do (Ziemke, 2016). The problem is that we still don't understand how consciousness works in humans, and it is not clear why having a body or self-learning software would suddenly generate conscious awareness.

Many other replies and counterarguments have been proposed. While still controversial, the Chinese room argument has been and still is hugely influential in the cognitive sciences, AI studies, and the philosophy of mind.

References

1 John Searle is one of the most influential contemporary philosophers of mind. His stellar academic career at Oxford and UC Berkeley has been tainted by allegations of sexual assault: A lawsuit filed in 2019 reached a confidential settlement and an internal investigation by UC Berkeley resulted in his emeritus status being revoked (Atkins, 2018; Weinberg, 2019).

Atkins, D. (2018, October 16). Berkeley Prof can't avoid harassment settlement, judge told. Law 360. https://jacksonkernion.com/files/Law360%20Article.pdf

Block, N. (1981). Psychologism and behaviorism. Philosophical Review, 90(1), 5-43. https://doi.org/10.2307/2184371

Cole, D. (2023). The Chinese room argument. In E. N. Zalta & U. Nodelman (Eds.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/chinese-room/

Davalos, J., & Lanxon, N. (2023, April 19). AI isn’t sentient. Blame its creators for making people think it is. Bloomberg. https://www.bloomberg.com/news/newsletters/2023-04-19/ai-sentience-debate-chatgpt-highlights-risks-of-humanizing-chatbots

LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444. https://doi.org/10.1038/nature14539

Oppy, G., & Dowe, D. (2021). The Turing test. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/turing-test/

Pang, D. K. F. (2023a). Why does ChatGPT feel so human? Psychology Today. https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202305/why-does-chatgpt-feel-so-human

Pang, D. K. F. (2023b). What is consciousness? Psychology Today. https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202305/what-is-consciousness

Pang, D. K. F. (2023c). The many dimensions of consciousness. Psychology Today. https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202305/the-many-dimensions-of-consciousness

Pang, D. K. F. (2023d). The mysteries of a mind locked inside an unresponsive body. Psychology Today. https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202307/the-mysteries-of-a-mind-locked-inside-an-unresponsive-body

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457. https://doi.org/10.1017/S0140525X00005756

Searle, J. R. (2009). Chinese room argument. Scholarpedia, 4(8), 3100. http://dx.doi.org/10.4249/scholarpedia.3100

Tiku, N. (2022, June 11). The Google engineer who thinks the company’s AI has come to life. The Washington Post. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Weinberg, J. (2019, June 21). Searle found to have violated sexual harassment policies (Updated with further details and statement from Berkeley). Daily Nous. https://dailynous.com/2019/06/21/searle-found-violated-sexual-harassment-policies/

Ziemke, T. (2016). The body of knowledge: On the role of the living body in grounding embodied cognition. Biosystems, 148, 4-11. https://doi.org/10.1016/j.biosystems.2016.08.005
