To the extent that we understand what makes us tick—what it means to be a mind, and how minds work—we do so through the realization that minds are bundles of computations. Paraphrasing Theodosius Dobzhansky's famous quip on the role of evolution in biology, I recently remarked that "nothing about the mind/brain makes sense except in the light of computation" (Trends in Cognitive Sciences, September 2012, Vol. 16, p.447). One implication of minds being what brains compute is that, on the face of it, your mind can be computed by a device other than your brain. It used to be the case that to hope to survive the passing away of your mortal coil, you had to get on the good side of one of the many religions that peddle promises of heavenly afterlife. Does our new understanding of how the mind works mean that we can all hope to live after death in the iCloud?

Before you rush off to reserve computing time on a suitably powerful server, to which your mind would be downloaded when the time comes to upgrade from the squishy factory-installed computer in your skull, there is a possible snag you should be aware of. It has to do with one crucially important issue arising from the process whereby brains compute minds, an issue that is at once less well understood than the others and more central to you being you (rather than someone, or something, completely different): the computational nature of consciousness.

Popular discussions of consciousness most often focus on its self-referential or "higher-order" aspects (as in having a sense of self, an awareness of perception, or a thought about thinking). The most basic form that consciousness takes is, however, simpler and more immediate than that. It is pure phenomenal awareness—the kind of feeling you get while "losing yourself" in the contemplation of a broad mountain vista, a feeling that seems infinitely rich, yet stops short of any "thoughts" in the usual sense of the word, such as thoughts about the subject of the experience (you) or about various objects that may be present in the scene (boulders, mountaintops, a marmot watching you from behind a tree stump).

Because the capacity for phenomenal awareness evolved along with the other faculties of the mind, we know that it must ultimately be amenable to a computational explanation. Several computational theories of phenomenal awareness have been advanced in recent years; how can we tell if these theories are any good? In addition to the usual general scientific criteria of agreement with available data and of predictive and explanatory power, theories of phenomenal awareness are subject to one absolutely fundamental constraint that is very intuitive and easy to formulate: any explanation of phenomenality must be intrinsic to the system in question. My awareness of the world is most forcefully and inalienably mine; in explaining what it is and how it works, a neuroscientist monitoring my brain cannot appeal to any measurements that my brain does not have access to.

The most straightforward way for a theory of consciousness to satisfy this constraint is to identify a system's phenomenality—its immediate and basic feeling of the richness of the experienced world—with its dynamics, or the succession of the system's representational states, as they unfold through time. As my colleague Tomer Fekete and I pointed out, the dynamics of a system (which can be expressed mathematically as the differential equations that govern its behavior over time) is as intrinsic and complete a specification of the system as can be (T. Fekete and S. Edelman, Towards a computational theory of experience, Consciousness and Cognition, 20:807-827, 2011). Very importantly, dynamics is essentially about time, and so is phenomenal awareness: consciousness frozen in time is, then, a contradiction in terms.
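The idea of dynamics as a system's complete intrinsic specification can be made concrete with a toy simulation. The two-unit system below, with its particular coupling matrix and nonlinearity, is purely an illustrative assumption of mine (it is not the model from the Fekete and Edelman paper); the point is only that what the system "is", on this view, is the succession of states it traces out through time:

```python
import numpy as np

# An illustrative two-unit dynamical system: dx/dt = -x + tanh(W @ x).
# The coupling matrix W is an arbitrary choice made for demonstration.
W = np.array([[0.0, -1.2],
              [1.2,  0.0]])

def step(x, dt=0.01):
    """Advance the state by one Euler step of dx/dt = -x + tanh(W @ x)."""
    return x + dt * (-x + np.tanh(W @ x))

# The system's intrinsic "specification" is its trajectory: the succession
# of representational states as they unfold through time.
x = np.array([0.5, -0.3])
trajectory = [x.copy()]
for _ in range(1000):
    x = step(x)
    trajectory.append(x.copy())

trajectory = np.array(trajectory)   # shape (1001, 2): states indexed by time
```

A single row of this trajectory, taken in isolation, carries none of the temporal structure, which is the sense in which consciousness frozen in time would be a contradiction in terms.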

To give rise to phenomenality, it should suffice, then, that the system's dynamics possess the same kind of rich intrinsic structure as the corresponding experience. The dynamical system consisting of some billiard balls on the floor of an otherwise empty room (for instance) does not fit the bill: although we may pretend that the presence of balls in this or that corner of the room represents this or that state of affairs in the world, the difference between the corners is not intrinsic to the system itself (it's up to us). By contrast, in a Rube Goldberg contraption, the billiard balls can only undergo a certain limited set of trajectories in response to being prodded; such a system is, therefore, capable of true, intrinsic discernment between different kinds of proddings. Now, multiply this capacity by a factor of many billions (think of the number of neurons in a brain like ours), and you'll get an inkling of the intrinsic representational—and therefore phenomenal—powers of truly complex dynamical systems.
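The contrast between the free-floating billiard balls and the constrained contraption can be caricatured in a few lines of code. The one-dimensional double-well system below (dx/dt = x - x**3, again my own illustrative choice, not anything from the cited work) admits only two stable resting states; which one it settles into is determined by how it was prodded, so the discrimination between proddings is carried in the system's own state, with no outside labeling required:

```python
def respond(prodding, steps=500, dt=0.01):
    """Evolve a toy 'contraption' with two stable states, x = +1 and x = -1.
    The double-well dynamics dx/dt = x - x**3 is an illustrative assumption."""
    x = float(prodding)
    for _ in range(steps):
        x += dt * (x - x**3)   # Euler integration of the double-well flow
    return x

# Opposite proddings leave the system in intrinsically different states:
print(round(respond(+0.1)))   # prints 1
print(round(respond(-0.1)))   # prints -1
```

Scale this up from one bistable variable to billions of interacting units, and the repertoire of intrinsically distinct trajectories grows accordingly.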

We can now see why an attempt to "download" a mind into a conventional digital computer by making the computer simulate the workings of the brain's neurons is doomed to fail with respect to the one aspect of that mind that matters most to it: its phenomenal experience. The problem lies in the fundamental difference between, on the one hand, the intrinsic dynamics of the digital gates that make up the computer (and the transistors of which they are built), and, on the other hand, the dynamics of the neural circuits that make up the brain. It is true that a digital computer can simulate any other classical physical system to an arbitrary degree of precision; such a simulation, however, is always a matter of an outside interpretation of its high-level states, which arbitrarily ignores the underlying dynamics of its electronics (for a detailed exposition of this argument, see T. Fekete and S. Edelman, On the (lack of) mental life of some machines, published as chapter 5 of Being in Time: Dynamical Models of Phenomenal Awareness, John Benjamins, 2012).

So, if my mind is somehow seamlessly downloaded into a beefed-up MacBook connected to a robot body, the resulting avatar will respond and act as a humanoid, if at all, only for a very limited time, because its phenomenal experience of the world—dictated by its alien intrinsic dynamics—is going to be strange indeed, and who knows what it would then be up to. Having the right dynamics on the inside is absolutely crucial to feeling like a human being—or like any other sentient being—and so of course also to behaving like one.

One of my favorite songs by John Brown's Body, The Grass, has a line in it that would sound familiar to any cognitive scientist: "We are more than the sum of our parts." The suspicion that what is important about a bunch of parts is the way in which they interact dynamically has been around in cognitive science for some time. Douglas Hofstadter, for instance, writing in The Mind's I (1981, p.191), wondered, "Is the soul greater than the hum of its parts?" Our emerging understanding of consciousness affirms JBB's sentiment, while offering closure for Hofstadter's musing: the core of the mind/brain's phenomenal existence is precisely equal to the hum of its parts.

And now the good news, such as it is: if the intrinsic dynamics of the system are all-important, then what the system is made of does not really matter, and so hope still exists that an artificial computational receptacle can be devised that would be more friendly to a human mind than a digital computer.

You are reading The Happiness of Pursuit
