A Critique of Christof Koch's Account of Consciousness

The flaws of Information Integration Theory.

Posted Jun 09, 2020

Neuroscientists who write about consciousness used to be dismissed as undergoing “philosopause,” the abandonment of science for philosophical speculations, such as finding God among the neurons. But Francis Crick made it respectable for biologists to probe the mysteries of consciousness, and his collaborator, Christof Koch, has continued the admirable adventure. 

Koch’s new book The Feeling of Life Itself presents his newest take on consciousness with a defense of Giulio Tononi’s theory that consciousness results from integrated information. The book is well written and full of interesting ideas about brains and consciousness, but the theory has numerous flaws, including the implication that toilets are conscious. 

Information Integration Theory starts with five so-called axioms: that each conscious experience is (1) intrinsically “for itself,” (2) structured into distinct sensory aspects, (3) informationally rich with abundant detail, (4) integrated in not being reducible to its components, and (5) definite in having contents and spatiotemporal properties that exclude other experiences. The last four are not as self-evident as axioms are supposed to be, but they are plausible generalizations about various kinds of consciousness. In contrast, the first is obscure, marked only by physical elements that specify “differences that make a difference” to themselves, which is more a mysterious mantra than an explanation.   

Figure 1. Toilet mechanism.
Source: Wikimedia Commons

Koch states that (p. 79) “consciousness is a fundamental property of any mechanism that has cause-effect power upon itself.” This statement is ridiculous because it includes any machine that uses a feedback mechanism, such as the float valve in a toilet. When the toilet flushes, a valve in the water tank refills it with water until a hollow float connected to a lever rises enough to close the valve, as shown in figure 1. The toilet causes itself to stop refilling with water when the float rises high enough to shut off the valve. 

Figure 2. Causal structure.
Source: Paul Thagard

Figure 2 simplifies the causal structure of this apparatus, showing the causal feedback loop involving the water level, the float, and the valve. The toilet does contain some physical information—for example, with the float representing the water level and the valve representing the water flow. You could even say that the apparatus integrates such information. But we have absolutely no reason to suppose that the toilet has even a little bit of consciousness.
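The feedback loop in Figure 2 can be sketched as a few lines of Python. This is a toy simulation, not anything from Koch's book: the shutoff level and the assumption that the float simply rides the water level are my own simplifications.

```python
# Toy simulation of the float-valve feedback loop in Figure 2:
# the valve refills the tank until the float rises to the shutoff level.
FULL = 10  # hypothetical water level at which the float closes the valve

def refill(level):
    """Refill the tank from `level`, returning the final water level."""
    valve_open = True
    while valve_open:
        level += 1                 # water flows in while the valve is open
        float_height = level       # the float simply rides the water level
        valve_open = float_height < FULL  # float shuts the valve at FULL
    return level

print(refill(2))  # -> 10
```

The loop is pure feedback: the water level causes the float height, which causes the valve state, which causes the water level. That is all the "cause-effect power upon itself" the toilet has.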

In contrast, people can report their conscious experiences, and even fish display pain behaviors that suggest they are conscious. In a previous blog post, I argued that consciousness first evolved in simple animals with millions of neurons.

Koch and Tononi may well bite the bullet and say that toilets are, in fact, a bit conscious, since they already grant some consciousness to bacteria and simple logic gates. I think a better strategy is to look elsewhere for a biologically plausible theory of the neural mechanisms that support consciousness. Instead of biological mechanisms, Information Integration Theory offers a mathematical quantity called PHI that is supposed to measure the extent to which a causal mechanism cannot be reduced to its parts.

Unfortunately, computing PHI requires considering all possible mechanisms that could operate in a system, a number that grows exponentially with the size of the system. For example, a neural group with only three neurons has just 2^3 = 8 ways of combining them, but 100 neurons allow 2^100 possibilities, a 31-digit number, and calculating the number for billions of neurons would far exhaust the resources and history of the entire universe. Hence the PHI measure is mathematically useless for real systems. It also does not provide any explanation of why conscious experiences such as visual perceptions, pains, emotions, and thoughts are so different from each other.
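The exponential blowup is easy to check. The sketch below only counts subsets of elements, which is a lower bound: full PHI calculations range over partitions of the system, which grow even faster.

```python
# Counting the subsets a PHI-style computation must consider:
# n elements have 2**n subsets, so the count is exponential in n.
def num_subsets(n):
    return 2 ** n

print(num_subsets(3))              # 8 combinations for three neurons
print(len(str(num_subsets(100))))  # 2**100 is a 31-digit number
```

For the roughly 86 billion neurons of a human brain, 2^n dwarfs the number of atoms in the observable universe, which is the point of the argument above.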

Whereas Koch is extravagant in granting consciousness to bacteria and logic gates, he is stingy in insisting that computers cannot be conscious. His argument is based on two examples of currently successful technologies: deep learning networks and reinforcement learning. He says these use feedforward neural networks without the causal feedback loops that he thinks are crucial for consciousness. But many current computers, including smartphones, are full of feedback loops: for example, when Google Maps readjusts your route after GPS shows that you missed a turn. Based on Information Integration Theory, Koch ought to say that computers are already conscious. In contrast, I don't think that any computers are currently conscious, but it is an open question whether advances in computer hardware and software will eventually give them the same causal powers as the brain to have conscious experiences, such as emotions.

Fortunately, the inadequacy of the Tononi-Koch theory of consciousness does not require reverting to the traditional idea that consciousness is a property of nonphysical minds. Here are three alternative neural theories of consciousness:

1. Dehaene's global neuronal workspace theory (Dehaene, 2014).
2. Graziano's attention schema theory (Graziano, 2017).
3. The semantic pointer competition theory (Thagard & Stewart, 2014).

All of these describe biological mechanisms for consciousness without dismissing the possibility of computer consciousness or extending it to toilets.

Programming exercise: Mayner et al. (2018) offer a Python program, PyPhi, for calculating PHI. It should be easy to translate the graph in Figure 2 into input for this program and calculate the quantity of consciousness in a flush toilet. This calculation will complete my reductio ad effluvium.
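As a starting point for that exercise, the Figure 2 loop can be encoded as the kind of state-by-node transition probability matrix that PyPhi takes as input. The deterministic update rules below are my own simplified reading of the diagram, and I assume PyPhi's little-endian state ordering (the first element's bit varies fastest).

```python
from itertools import product

# Binary elements of the simplified toilet mechanism from Figure 2:
# W = water level high, F = float up, V = valve open.
# Hypothetical deterministic update rules (my reading of the causal loop):
# the float rises when the water is high, the valve closes when the
# float is up, and the water rises while the valve is open.
def step(w, f, v):
    return (v, w, 1 - f)  # next values of (W, F, V)

# State-by-node transition matrix in little-endian ordering: row i is
# the state whose bits (W, F, V) are the binary digits of i, with W as
# the least significant bit; each row gives the next state of each node.
tpm = [list(step(w, f, v)) for v, f, w in product((0, 1), repeat=3)]
```

This `tpm` could then be passed to `pyphi.Network` and on to PyPhi's phi computation to put a number on the toilet's alleged consciousness; whatever that number turns out to be, the absurdity stands.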


Dehaene, S. (2014). Consciousness and the brain: Deciphering how the brain codes our thoughts. New York: Viking.

Graziano, M. S. A. (2017). The attention schema theory: A foundation for engineering artificial consciousness. Frontiers in Robotics and AI, 4. doi:10.3389/frobt.2017.00060

Koch, C. (2019). The feeling of life itself: Why consciousness is widespread but can’t be computed. Cambridge, MA: MIT Press.

Mayner, W. G. P., Marshall, W., Albantakis, L., Findlay, G., Marchman, R., & Tononi, G. (2018). PyPhi: A toolbox for integrated information theory. PLoS Computational Biology, 14(7), e1006343. doi:10.1371/journal.pcbi.1006343

Thagard, P. (2019). Brain-mind: From neurons to consciousness and creativity. New York: Oxford University Press.

Thagard, P., & Stewart, T. C. (2014). Two theories of consciousness: Semantic pointer competition vs. information integration. Consciousness and Cognition, 30, 73-90.