We have mentioned several reasons that justify a dissociation between consciousness and attention, including considerations about their evolution (Haladjian and Montemayor, 2015). Our preferred way to visualize the degree of the consciousness and attention dissociation, or CAD, is to see it as a spectrum. At one end of the spectrum are identity views, which claim that consciousness is identical to attention. At the other end are complete dissociation views, which hold that consciousness and attention are wholly independent processes. These views can be explained in different ways, and while they have some advantages, we argue that they are too extreme to explain the intricacies of consciousness and attention (see Montemayor and Haladjian, 2015).
In between these opposing views, one finds a vast array of possible views about the interaction between consciousness and attention. We schematize some possibilities based on whether or not attention is necessary for consciousness, which is an approach many favor. Attention may not be sufficient for consciousness, and thus one can justify the view that there are unconscious forms of perceptual attention. A lot of nuance is added to this spectrum if one considers that there might be different forms of conscious awareness (Kriegel, 2015), and attention may be necessary for only some of them, which increases the possibilities of how to model CAD. In a previous post on dreams, we mentioned the possibility that dream awareness may differ from wakeful awareness and lucid dreaming in ways that involve two types of attention—one focused on immediate experiences and another focused on the awareness that one is dreaming. This is why we prefer to visualize CAD as a spectrum that allows for such possibilities of interaction and independence.
But other ways of visualizing the relationship between consciousness and attention are possible. Although they are not fully compatible with the spectrum view, they are still best understood as falling under some section of the CAD spectrum. Perhaps the most important and simplest of these views is some kind of “part-hood” view. According to one version of the part-hood view, consciousness is a vast set of psychological phenomena, only some of which include attention. Here, attention is a subset of conscious phenomena, and it depends on some kind of conscious awareness for its existence. Notice that this view is compatible with a theory of consciousness that allows for various forms of consciousness, only some of which involve attention—a view that can be accommodated by the CAD spectrum. For instance, some forms of consciousness may be considered more primitive or fundamental, and only the more “perceptual” forms of consciousness may require attention.
The opposite view is also conceivable: attention is now the more comprehensive phenomenon, and only some forms of attention involve consciousness. This view has the appealing implication that it allows for a variety of unconscious forms of attention, a claim that is compatible with experimental findings. What could determine which part of attention is conscious? Working memory seems to be the best candidate. In fact, some “global workspace” theories of consciousness can be understood in this way: attention is the fundamental phenomenon, and only a specific type of cross-modal, reentrant, and globally activated attention is conscious (Graziano’s attention schema theory seems to fit this view; see Graziano & Webb, 2014). This, too, is a view that falls within the CAD spectrum.
Would part-hood views present problems for our claim that the relationship between consciousness and attention falls somewhere on a spectrum of dissociation? If one takes the part-hood relation literally, one can see potential problems. On the first view, attention is not really separate or independent from consciousness. Rather, it characterizes just one type of conscious phenomenon and so is not truly dissociated. Similarly for the second view: attention is the genus and conscious attention the species. The CAD spectrum, however, seems to entail that any dissociation between consciousness and attention amounts to a degree of mutual independence. Notice, though, that CAD allows for many types of overlap between consciousness and attention, so it has enough conceptual space to accommodate the crucial aspects of part-hood views.
But even if the CAD spectrum and part-hood views were not fully compatible, it is useful to think about part-hood views in terms of CAD. Consider that there could be many forms of consciousness, only some of which involve attention. Alternatively, there could be many forms of unconscious attention and only a few forms of attention that are conscious. These options are compatible according to CAD, which allows for many forms of consciousness and many forms of attention at once, but not according to part-hood views, each of which must treat one phenomenon as wholly contained within the other. So there is a risk of engaging in merely verbal disputes if one insists that attention is just part of consciousness, or vice versa. The goal is to achieve a comprehensive framework that helps elucidate the complex phenomena of consciousness and attention, not merely to win theoretical debates about how to fit consciousness and attention into a pie chart. We believe CAD offers the best approach to avoiding this problem because it is the most flexible and empirically well-supported framework.
More research is needed to fully understand where human consciousness lies on the CAD spectrum. An intriguing possibility is that human consciousness falls in a unique section of CAD, while other species and artificially intelligent systems (if they ever become human-like in their cognitive capacities) fall in different sections. For example, some animals likely have phenomenal consciousness and certainly manifest abilities to attend to their environment and even to the intentions of conspecifics, but they are unlikely to have conceptual or linguistically formatted forms of attention or self-conscious reflective attention, which presumably reduces their degree of CAD.
To illustrate this point, consider pain. Pain can be divided into two broad categories: nociceptive and empathic (e.g., see Zaki et al., 2016). Nociceptive pain is caused by bodily damage and is associated with the experience of aching pain. Empathic pain is the negative experience we feel when we attend to someone else’s pain. Researchers are currently investigating the ways in which nociceptive and empathic pain overlap. This distinction, however, may be sharper in some species than in others, because the overlap between nociceptive and empathic pain encompasses attending to pain, detecting pain in others, and reacting empathically to their pain, which are not necessarily the same capacities. CAD is ideal for accommodating this possibility and for helping to distinguish the strictly experiential aspects of pain from the distinct ways in which, and the extent to which, one attends to pain in oneself and in others.
With respect to artificial intelligence (AI), we argued in a previous post that while AI may reproduce or simulate intelligence and even human-like attention, it is highly unlikely that AI systems will develop empathic forms of conscious attention. Given CAD, another interesting possibility presents itself. Perhaps AI systems will be capable of developing complex attention routines for feature detection while remaining incapable of having any kind of experience we associate with phenomenal consciousness. On the CAD spectrum, such systems would have forms of attention we associate with humans but, unlike animals, would exhibit no overlap between attention and phenomenal consciousness.
A critical implication of interpreting CAD not only as a framework for understanding human consciousness and attention, but also as an approach to how other conscious or intelligent systems differ from humans, is that it may help inform moral issues. Phenomenal consciousness may have evolved to engage the whole cognitive and behavioral system motivationally and emotionally, but it also makes our lives valuable and meaningful (Humphrey, 2011). Obviously, moral standing is an issue that must involve ethical theories and that goes beyond the implications of CAD. Nevertheless, CAD still makes a helpful distinction for ethical theorizing: to the extent that animals share morally relevant conscious experiences, to that extent they deserve moral consideration. The situation appears decisively different with respect to AI: no matter how sophisticated their human-like attention becomes, AI systems will never reach the moral standing that motivations and conscious awareness provide. That is, they will never be human.
- Carlos Montemayor & Harry Haladjian
Graziano, M. S. A., & Webb, T. W. (2014). A mechanistic theory of consciousness. International Journal of Machine Consciousness, 6(2), 1-14.
Haladjian, H. H., & Montemayor, C. (2015). On the evolution of conscious attention. Psychonomic Bulletin & Review, 22(3), 595-613.
Humphrey, N. (2011). Soul Dust: The Magic of Consciousness. Princeton, NJ: Princeton University Press.
Kriegel, U. (2015). The Varieties of Consciousness. New York, NY: Oxford University Press.
Montemayor, C., & Haladjian, H. H. (2015). Consciousness, Attention, and Conscious Attention. Cambridge, MA: MIT Press.
Zaki, J., Wager, T. D., Singer, T., Keysers, C., & Gazzola, V. (2016). The anatomy of suffering: Understanding the relationship between nociceptive and empathic pain. Trends in Cognitive Sciences, 20(4), 249-259.