Verified by Psychology Today

How Consciousness Is Different From Augmented Reality

Making decisions based on what is in one's conscious field in the present.

Key points

  • Consciousness resembles "augmented reality" in many ways.
  • However, in one way, consciousness is quite different from augmented reality.
  • This difference reveals a peculiar property of the conscious field.
Smart Helmet
Source: Wikimedia Commons/Public Domain

In the movie Gravity, the astronauts’ futuristic helmets have special, high-tech shades that can display critical information, such as the amount of oxygen left in the oxygen tank. (The helmet has a projector system that projects this information on the shades.) Through the shades of the helmet, the astronaut can see the surrounding environment, as with a traditional helmet, but can also monitor these additional signals about the oxygen tank, the ship's location, etc.

The helmets of today’s fighter pilots, too, can display critical information in this way. These “smart helmets” can display altitude, coordinates, and radar signals on the shades. The pilot’s view is not obstructed: The pilot can see the sky and clouds while monitoring these visual signals. This technology is a simple form of “augmented reality.” While watching Gravity, I thought it would be neat to own such a device and realized that we humans are already equipped with one: our conscious field.

The “conscious field” is composed of everything that one is conscious of at one moment in time. Each thing that one is aware of is called conscious content. At one moment in time, the conscious field can be composed of a medley of conscious contents, such as the visual objects in one’s environment, the smell of lavender, or, in unfortunate cases, a toothache or ringing in the ears. Important signals such as thirst, hunger, and air hunger (when carbon dioxide levels are high) can also occupy the field, as can memories and earworms (songs that one can’t get out of one’s head).

As with the smart helmets, the main purpose of the conscious field is to allow one to perform the most adaptive action possible in light of all that is going on at present (see the theoretical account in Morsella et al., 2016). While walking, one is aware of the sidewalk, the trees one passes, and perhaps the urge to sneeze and the memory that one must stop by the store to buy milk. Things that are important for action selection (i.e., what one decides to do) are usually represented in the conscious field, and they are represented in a way that normally leads to adaptive action. For example, surfaces of very high kinetic energy are perceived as “painfully hot” and are avoided. Ripe bananas look different from unripe ones, so one picks only the former. Foods of high caloric value tend to taste good, so one desires them.

But the conscious field is different from the smart helmet in at least one important way.

If one could monitor from afar (through telemetry) what was projected onto a pilot’s shades, one would not always be able to predict what the pilot chooses to do, because the pilot’s actions can be based on information that lies outside the “smart helmet” system. For example, if the pilot felt dizzy and thought it best to return to base, the cause of the pilot’s decision would not be represented in the helmet: Nothing displayed on the shades would predict the change in flight plans, so an observer monitoring the telemetry would not know why the pilot was returning to base. The pilot’s decisions depend on what is in the display as well as on information that is, in a sense, outside of the display system (e.g., dizziness, nausea, or vertigo).

In contrast, the contents of the conscious field wholly and exclusively determine voluntary action selection (Morsella et al., 2016). Voluntary action selection is not based on what occupies the conscious field plus information residing in some other system: The way the machinery appears to work is such that the contents of the field at one moment are the sole determinants of action selection. (Of course, unconscious processes can influence behavior directly and are responsible for constructing the conscious field, but these processes are not part of voluntary action selection.)

Hypothetically, knowledge of all the contents of the conscious field at one moment in time would predict what the actor will decide to do: that is, it would predict the nature of voluntary action selection. Unlike with the smart helmet, there is no extra system outside of the field that directs voluntary action selection.

When the conscious field is not operating properly, action selection suffers.

For example, if action-relevant conscious contents are not present in the field, action selection will still occur, but the resulting behavior will not reflect all the kinds of information that should influence it. Such non-adaptive behaviors are obvious in neurological conditions in which actions are somehow decoupled from consciousness. In these situations, there is no independent system or repository of knowledge that can step in to fill the role of the missing contents.

For example, the decision-making and behavior of someone who is no longer aware of smells (as occurs in anosmia) will not reflect that there is a gas leak. The “smart helmet” of the conscious field simply does not represent a “gas leak.” Because the “absence of information” is different from “information about absence,” patients are often unaware that critical information is absent from the conscious field, just as one is unaware of the blind spot in vision.

Nevertheless, even when the conscious field is constructed poorly and operating abnormally, action selection must proceed based on the contents that happen to occupy the conscious field at that moment. Several times a second, these conscious contents and nothing else are generating voluntary action selection. This is very different from the smart helmet in Gravity. The signals in the helmet inform decision-making, but they do not determine it.

References

Morsella, E., Godwin, C. A., Jantz, T. K., Krieger, S. C., & Gazzaley, A. (2016). Homing in on consciousness in the nervous system: An action-based synthesis. Behavioral and Brain Sciences [Target Article], 39, 1-17.

More from Ezequiel Morsella Ph.D.