Can machines dream?
AI machines will radically alter the way we dream.
Posted Apr 27, 2019
We all know that artificially intelligent (AI) machines now surround us, doing things we once thought only intelligent human beings could do. They perform surgeries, win chess and “Go” championships, do calculations at lightning speed, use language, manipulate objects, simulate worlds, and, most importantly, learn. But can they dream? I am inclined to think so, though I am not entirely convinced. Dreaming involves a subtle mixture of virtual simulations of various types of social interactions, along with the processing of very high-level, abstract information about all kinds of highly unusual, though weak, signals that we still know very little about.
Functional imaging studies of the brain during REM sleep have demonstrated a down-regulation of dorsolateral prefrontal, parietal, and supplementary motor cortex, as well as an up-regulation of limbic and sensory association cortex. REM-sleep neurobiology very likely involves changes in 5HT2A receptor signaling. Functional brain changes under the influence of 5HT2A agonists have also been carefully studied with fMRI and other techniques. 5HT2A agonists, like psilocybin and LSD, consistently produce an ensemble of brain changes involving a global down-regulation of dorsolateral prefrontal (and perhaps parietal) activity and an up-regulation of sensory association and limbic emotional areas. This profile of functional brain changes is remarkably similar to what occurs in REM sleep. The higher executive control systems centered in the dorsolateral prefrontal cortex are relaxed or actively inhibited, while the sensory and emotional processing centers go into overdrive. The influence of neural activity from lower-order sensory association and limbic regions upon higher-order regions is dramatically enhanced. Presumably, under these functional brain conditions, the integrative processing centers within the ventromedial prefrontal regions are inundated with very highly processed sensory information, allowing for the production of a unique form of information: one that must involve very highly refined analyses of the significance of recent sensory and perceptual input.
In my opinion, if machines are going to attempt dreaming, this kind of highly refined processing of very unusual sensory and perceptual information, resulting in the kind of creative insight generated regularly by dreams, will need to be part of the picture. Deep convolutional neural networks (DCNNs) have been particularly successful in making machines intelligent. Not surprisingly, perhaps, they are even more like the human brain than their artificial neural network (ANN) ancestors. DCNNs are particularly adept at the difficult task of object recognition in natural scenes. A more recent development in ANN architectures is generative adversarial networks (GANs). These are deep neural net architectures comprising two nets pitted against each other: a generator that produces candidate data, and a discriminator that tries to tell the generator’s output apart from real data. GANs have proven remarkably effective, especially at generative tasks such as synthesizing realistic images, and they can learn from far less labeled data than traditional ANNs require.
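The adversarial setup can be sketched in miniature. The toy below is an illustrative sketch, not any production system: a two-parameter generator and a logistic discriminator compete over one-dimensional data, with the target distribution, learning rate, and step count all chosen arbitrarily. The two gradient updates, however, are the standard discriminator loss and non-saturating generator loss.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(4, 1). The generator must learn to mimic them.
real = lambda n: rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b maps standard-normal noise to candidate samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores the "probability that x is real".
w, c = 0.1, 0.0

lr = 0.05
for step in range(3000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    x_r = real(32)
    z = rng.normal(size=32)
    x_f = a * z + b
    p_r, p_f = sigmoid(w * x_r + c), sigmoid(w * x_f + c)
    grad_w = np.mean(-(1 - p_r) * x_r + p_f * x_f)
    grad_c = np.mean(-(1 - p_r) + p_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=32)
    x_f = a * z + b
    p_f = sigmoid(w * x_f + c)
    dx = -(1 - p_f) * w          # dL_G/dx_f for L_G = -log D(x_f)
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

# Since z ~ N(0, 1), the generator now produces samples from roughly N(b, a^2).
print(f"generator distribution: N({b:.2f}, {abs(a):.2f}^2)")
```

The key point is the one the article makes: neither net is told what the real data look like; each improves only by trying to beat the other, and the generator’s offset drifts toward the real mean of 4.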
A defining feature of GANs, ANNs, and DCNNs (that allows them to learn) is the use of backpropagation to adjust the networks’ connection weights in order to minimize prediction (categorization) errors. In predictive processing theories of mind and brain, the brain is modeled as a prediction machine. It seeks to predict, guess, or anticipate what will occur, and then it samples incoming sensory information about what actually did occur in order to compare the actual data against the predicted simulation. It then computes the difference and attempts to minimize that difference in future simulations or predictions. These predictive simulations are theorized to occur at every level of the neuraxis, from primary motor levels right up to the most cognitively abstract levels subserved by the most recently evolved areas of the prefrontal lobes. The minimization of prediction error, across multiple hierarchical layers, approximates a process of Bayesian inference such that perceptual content corresponds to the brain’s “best guess” of the causes of its sensory input.
Google's Deep Dream machine productions very successfully mimic the kind of visual phenomena associated with both dreaming and psychedelics. Anyone perusing the artistic productions of Deep Dream, especially those involving human-machine collaborations, is forced to entertain the possibility that it displays a kind of dreaming. Efforts are now underway to link human brains with AI machines and the internet itself in order to produce a kind of dreaming.
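Deep Dream's core trick inverts ordinary learning: instead of adjusting weights to reduce error, it adjusts the image itself, by gradient ascent, to increase a chosen unit's activation, so the picture is redrawn to contain more of whatever that unit detects. The toy below is a deliberately simplified sketch: a fixed linear "neuron" that likes diagonal edges stands in for a layer of a trained CNN.

```python
import numpy as np

rng = np.random.default_rng(1)
# A hypothetical fixed "neuron": it responds to a diagonal-edge pattern.
pattern = np.eye(8)                      # 8x8 diagonal filter
img = rng.normal(0.0, 0.1, (8, 8))       # start from a near-noise image patch

def activation(im):
    return float(np.sum(im * pattern))   # how strongly the neuron fires

for _ in range(100):
    grad = pattern                       # d(activation)/d(img) for this linear neuron
    img += 0.1 * grad                    # gradient ASCENT on the pixels themselves

# The image now contains a strong diagonal: it has been redrawn to show
# more of what the detector responds to, which is Deep Dream's mechanism.
print(activation(img))
```

With a real multi-layer network the same loop hallucinates eyes, dogs, and spirals into clouds and skies, which is why the results look so dreamlike and psychedelic.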
Brain-computer interfaces are now giving way to proposals for “human brain/cloud interfaces” (“B/CI”). The proposal is not just to plug your brain into your local computer, but to plug it into the world wide web (the cloud). One possible way to make this happen is via the use of so-called “neuralnanorobotics”—that is, the use of very tiny robots that navigate the human vasculature, cross the blood–brain barrier (BBB), and thus access the brain. These nanorobots would then wirelessly transmit thousands of bits per second of synaptically processed and encoded human-brain electrical information to a cloud-based supercomputer or the world wide web. A neuralnanorobotically enabled human B/CI would allow persons to obtain direct, instantaneous access to virtually any facet of cumulative human knowledge stored or accessed via the internet, including millions of dream reports and images.
Brain-cloud interfaces are now being proposed for integration with virtual reality (VR) applications to better simulate the dream experience. Using neuralnanorobotics to transfer information about one person’s experiences into a VR environment would make those experiences directly available to a receiver immersed in VR. Immersive VR may thus enable vicarious experiences of another person’s dreamworlds. “Transparent Shadowing” (TS) refers to the neuralnanorobotically empowered B/CI technologies that would permit users to experience fully immersive, real-time episodes of another person’s experiences. What will become of the dreams of the shadower if they can literally experience the dreams of another individual? Will their dream images merge, conflict, or both? How will that affect daytime interactions between these people?
Whether or not AI machines will one day be able to experience dreaming, they will radically transform our experience of dreams.