Artificial neural networks are modeled on the biological neural networks that make up our brains; they enable computers to learn in much the way our brains learn. For example, we learn to differentiate concepts over time by repetition: after seeing many varieties of trees and of flowers, we form a template of a tree, and can recognize trees in the future even when we encounter a new variety. Certain features of trees—branch-leaves-trunk—become connected, and when they activate together we recognize: this is a tree. An artificial neural network acts in similar fashion; connections between artificial neurons strengthen over time if they are frequently activated together, in what’s termed “Hebbian” learning.
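The Hebbian principle can be sketched in a few lines of code. This is an illustrative toy of our own, not any specific model; the function name and learning rate here are arbitrary choices:

```python
def hebbian_update(weight, pre_active, post_active, lr=0.1):
    """Strengthen a connection only when both neurons fire together
    ('cells that fire together, wire together')."""
    if pre_active and post_active:
        weight += lr
    return weight

# Repeated co-activation of a 'branch' neuron and a 'tree' neuron
# gradually strengthens the link between them:
w = 0.0
for _ in range(10):
    w = hebbian_update(w, pre_active=True, post_active=True)
print(round(w, 2))  # prints 1.0 — the connection has grown strong through repetition
```

Note that the connection only ever grows; nothing in this simple rule weakens it, which is exactly the problem described next.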

However, this model is not foolproof, because this form of straightforward learning can quickly produce over-dominant connections that inhibit creative learning. For instance, when the connections between branch-leaves-trunk-Tree grow too powerful, any relevant input, such as the leaves on a four-leaf clover, can be hijacked by the tree network, and other possible pathways are neglected. This is called a ‘restrictive feedback loop’: one set of connections restricts any others from forming, and repeatedly reinforces itself above all others.
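A toy simulation, our own construction rather than anything from the cited papers, makes the runaway dynamic concrete: under pure Hebbian reinforcement, whichever pathway starts out strongest captures every input and is the only one that ever grows.

```python
# Two candidate pathways compete for 'leafy' inputs; 'tree' starts slightly ahead.
weights = {"tree": 10, "clover": 9}

for _ in range(20):
    # Each new input activates the currently strongest pathway...
    winner = max(weights, key=weights.get)
    # ...and only that pathway is reinforced, widening its lead.
    weights[winner] += 1

print(weights)  # prints {'tree': 30, 'clover': 9} — 'clover' never got a chance
```

The initial advantage compounds on every round: the loop reinforces itself above all others, which is the restrictive feedback loop in miniature.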

A recent paper (Thiele, Diehl, & Cook, 2017) proposed adding a ‘wake-sleep’ algorithm to an artificial neural network model to correct this problem. The sleep phase would temporarily turn off the Hebbian learning mode, suspending updates to connection strengths, and instead allow random input to run through the network without prejudice. This is likened to the process of dreaming in humans.
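In rough terms, the idea can be sketched as two alternating phases. This is a heavily simplified illustration of the concept, not the authors’ actual algorithm; all of the names below are our own:

```python
import random

def wake_phase(weights, patterns, lr=0.1):
    """Hebbian learning ON: strengthen the connection for each co-active pair."""
    for pre, post in patterns:
        weights[(pre, post)] = weights.get((pre, post), 0.0) + lr
    return weights

def sleep_phase(weights, neurons, steps=5):
    """Hebbian learning OFF: random activity runs through the network,
    but no connection strengths are updated."""
    for _ in range(steps):
        pre, post = random.sample(neurons, 2)  # random input, 'without prejudice'
        _ = weights.get((pre, post), 0.0)      # activity propagates; no learning
    return weights

weights = wake_phase({}, [("branch", "tree"), ("leaves", "tree")])
before = dict(weights)
weights = sleep_phase(weights, ["branch", "leaves", "trunk", "tree"])
assert weights == before  # sleep explored the network without changing any weight
```

The key design point is simply that the sleep phase exercises pathways the dominant daytime circuit would never select, without letting that activity feed back into the weights.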

Meanwhile, in the field of human dream research, similar models have been proposed to describe ‘unlearning’ functions of the REM sleep/dreaming state. In two recent theoretical papers, Malinowski and Horton (2015) suggest a ‘decontextualization’ process in dreaming – a process of breaking down memories into small fragments that are then associated with numerous different memory traces, forming new connections throughout the autobiographical memory network that would not be formed during waking. This process relies partly on the ‘hyperassociativity’ of the dreaming state.

Hyperassociativity refers to the increased number of connections made between memories that would be only loosely associated during waking. While many researchers agree that dreaming and REM sleep are characterized by hyperassociativity, Malinowski and Horton suggest that these loose connections may underlie the insight and creativity that result from sleep.

The authors illustrate the hyperassociativity of dreaming with several examples of dream bizarreness: dreams join together unusual elements of memory—a friend might be personified by a cat; the narrative of a dream may abruptly change—your house suddenly transforms into your work office; and dreams pull together elements from the remote past with the recent past or even the anticipated future—you give an upcoming speech in your old high school.

Experimental research has also shown that cognition is hyperassociative following awakening from REM sleep: subjects give uncommon responses on a word-association task, and prefer weakly related over strongly related semantic word pairs (Carr & Nielsen, 2015). This evidence fits the picture of a sleep state that temporarily lifts the ‘Hebbian’ highways of waking thought.

Hartmann (1996) similarly suggested that in waking thought, information flows in a linear manner, whereas in dreaming there is no set direction to the flow of information; it is free to move forwards, backwards, or sideways to more loosely connected concepts. This may be essential for breaking memories down into fragments that can be better integrated into the network as a whole. The function is perhaps best demonstrated by what happens when it fails. In posttraumatic stress disorder, for example, recurrent nightmares that replay a trauma can persist for decades after the traumatic experience. This is reminiscent of a ‘restrictive feedback loop’ that has grown too powerful and dominant: any relevant input triggers the entire circuit to run. Thus, the system is unable to ‘unlearn’ the trauma, unable to break it down and allow novel connections to form in its place.

While ‘hyperassociativity’ in dreaming may have particular benefits for integrating emotional memories and stimulating creativity, it could be argued that this ‘unlearning’ feature is, at a more basic level, a mechanistic necessity for any neural network of this caliber to maintain itself and to avoid ‘restrictive feedback loops’. In fact, in the artificial neural network described earlier, the experimenters found that adding a ‘dreaming’ phase, in which Hebbian learning was turned off, increased learning rates up to tenfold, avoided restrictive feedback loops, and, best of all, gave their artificial neural networks the unexpected pleasure of dreaming.


Carr, M., & Nielsen, T. (2015). Morning REM sleep naps facilitate broad access to emotional semantic networks. Sleep, 38(3), 433-443.

Hartmann, E. (1996). Outline for a theory on the nature and functions of dreaming. Dreaming, 6(2), 147.

Horton, C. L., & Malinowski, J. E. (2015). Autobiographical memory and hyperassociativity in the dreaming brain: implications for memory consolidation in sleep. Frontiers in Psychology, 6.

Dream Factory

Does the Brain Need Dreaming to Unlearn?

An offline ‘dreaming’ phase enhances learning in an artificial neural network.
