
Consciousness: The Final Frontier

Why does being a brain feel like something instead of nothing?

When I walk through my kitchen, I smell fresh food and feel warmth radiating off of it. These internal, subjective experiences are called qualia. I assume that my refrigerator, my coffee mug, and my toaster lack qualia. That is to say, if a magician transformed me into a toaster, my world would turn dark and all experience would fade into oblivion.

Source: Jooyeun Lee/Knowing Neurons

So why does it feel like something to be a human brain? To be fair, sometimes it feels like nothing to be a human brain (when that brain is under anesthesia, in a coma, or dead). But why does it feel like something the rest of the time (when that brain is alive and awake)? That something is called consciousness: the experience you have when you aren’t in deep sleep, under anesthesia, or in a grave. Consciousness is perhaps the greatest mystery of nature and is challenging to study because it’s an inherently subjective phenomenon. Why does the brain have consciousness? Would it be possible for a brain as complex as ours to evolve without consciousness?

“What is it like to be a bat?” American philosopher Thomas Nagel first asked this question to demonstrate that even if we learn everything there is to know about the brain of a bat, it is still impossible to understand how the bat experiences the world. Like humans, bats are mammals with complex brains that likely support consciousness. And yet, bats use biological sonar, or echolocation, to sense their surroundings by bouncing sound off of solid surfaces.

Does echolocation feel more like hearing or seeing? Or something altogether different? Even if we map the entire brain of the bat, the problem of understanding what echolocation feels like for a bat is similar to the problem of explaining color to a blind person. Understanding these subjective experiences in entirely physical terms is likely impossible.

A slightly more tractable problem might be to determine which systems have consciousness. To tackle this problem, you must assume that other people and things have consciousness. While this seems like a safe (and largely necessary) assumption, it’s still only that — an assumption. How can you prove that you haven’t been dreaming or hallucinating your entire life (and your friends thus mere illusions)? Or, how can you be sure your friends and family aren’t philosophical “zombies,” people who act like you but lack internal experience?

Solipsism, the view that only your mind exists, is an idea few of us choose to accept. For most of us, solipsism is simply not a sane way to live. Instead, we infer other minds from behavior. Our friends, family, and neighbors behave like us, so we assume that they have consciousness. To a lesser extent, dogs, cats, and other mammals often behave like us (and have similar brains), so we assume they're conscious to some extent as well.

Building on these assumptions, neuroscientist Giulio Tononi has developed a theory that may eventually allow us to quantify the degree to which a brain or a computer is conscious. Integrated information theory, or IIT, postulates that consciousness is information integrated in the brain. The theory offers explanations for why we lose consciousness during sleep, anesthesia, and epileptic seizures. It also explains why the cerebellum, which contains most of the brain's neurons, can be damaged or even missing with minimal effect on consciousness.

During sleep, anesthesia, or a seizure, neurons in the brain tend to fire in synchrony. This "agreement" reduces the information capacity of the brain, like a book whose letters are all identical or a strand of DNA whose bases are all the same. When we are awake, there is less agreement among neurons. This variety allows the brain to store more information, like a book with many different words or a strand of DNA with many unique base pairs.
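To make the capacity analogy concrete, here is a minimal sketch (in Python, with made-up numbers that are not from the original post) comparing the Shannon entropy of a toy population of eight neurons that fire in lockstep with one whose neurons fire independently:

```python
import numpy as np
from collections import Counter

def pattern_entropy(activity):
    """Shannon entropy (in bits) of the firing patterns observed over time.
    The more distinct patterns a population produces, the more bits it carries."""
    patterns = [tuple(frame) for frame in activity]
    counts = Counter(patterns)
    probs = np.array(list(counts.values())) / len(patterns)
    return -np.sum(probs * np.log2(probs))

rng = np.random.default_rng(0)

# "Sleep-like" regime: all eight toy neurons switch on (1) or off (0) together.
synchronized = np.repeat(rng.integers(0, 2, size=(1000, 1)), 8, axis=1)

# "Wake-like" regime: each toy neuron fires independently of the others.
desynchronized = rng.integers(0, 2, size=(1000, 8))

print(pattern_entropy(synchronized))    # ~1 bit: only two patterns ever occur
print(pattern_entropy(desynchronized))  # close to 8 bits: many distinct patterns
```

The lockstep population only ever produces two patterns (all on or all off), so it carries about one bit per moment in time, while the independent population produces many distinct patterns and carries far more.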

But just containing information is not enough. The brain's information must also be integrated in a meaningful way; otherwise, consciousness does not result. The cerebral cortex, with both short-range and long-range connections between neurons, is the ideal information-integrating machine. Without the cerebral cortex, we cannot have normal, wakeful experience. The cerebellum, on the other hand, contains many isolated chains of neurons with minimal crosstalk. Possibly for this reason, the cerebellum can be absent from birth without seeming to diminish consciousness!
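IIT's actual measure of integration, Φ ("phi"), is considerably more involved, but as a rough sketch of the idea, the toy Python code below (again, an illustration with made-up numbers, not IIT's formalism) asks how much two halves of a system tell you about each other:

```python
import numpy as np
from collections import Counter

def entropy(seq):
    """Shannon entropy (in bits) of a sequence of discrete states."""
    counts = Counter(seq)
    probs = np.array(list(counts.values())) / len(seq)
    return -np.sum(probs * np.log2(probs))

def mutual_information(x, y):
    """How much (in bits) knowing one half of the system tells you about the other."""
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

rng = np.random.default_rng(1)

# "Cerebellum-like": two halves that never influence each other.
half_a = rng.integers(0, 2, size=1000)
half_b = rng.integers(0, 2, size=1000)
print(mutual_information(half_a, half_b))          # ~0 bits: no integration

# "Cortex-like": the second half tracks the first, with occasional noise.
half_b_coupled = np.where(rng.random(1000) < 0.9, half_a, 1 - half_a)
print(mutual_information(half_a, half_b_coupled))  # well above 0: the halves share information
```

Halves that never interact, like the cerebellum's isolated chains, share essentially no information; halves that influence each other, like densely interconnected cortical regions, do.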

Integrated information theory may one day help us determine whether patients in a coma are conscious. It may also help determine whether pets, insects, and computers have consciousness. IIT has strong explanatory power and makes many testable predictions (assuming scientists can overcome a few ethical hurdles). For instance, IIT predicts which brain lesions should affect consciousness and how many fibers connecting the two hemispheres of the brain would need to be cut before a person's consciousness splits in two. The ethical obstacles are clear (any volunteers for those brain lesions?). But neuroscience has found ways around similar challenges before, with workarounds including animal models that closely resemble the human nervous system and techniques such as transcranial magnetic stimulation (TMS) that simulate lesions with magnetic fields.

Understanding consciousness may be the greatest challenge posed to science. While we may never truly understand why brains are conscious or what it feels like to be a bat, future neuroscientists studying consciousness will help push science to new limits.

~

This post first appeared on Knowing Neurons.

References

Nagel, Thomas. "What is it like to be a bat?" The Philosophical Review 83.4 (1974): 435-450.

Tononi, Giulio. "Integrated information theory of consciousness: an updated account." Archives Italiennes de Biologie 150.2-3 (2012): 56-90.

Yu, Feng, et al. "A new case of complete primary cerebellar agenesis: clinical and imaging findings in a living patient." Brain (2014): awu239.
