
How Our Brain Learns to Synchronize Sights and Sounds

New research shows synchronicity training generalizes across the visual field.

Key points

  • Sights and sounds can be perceived as synchronized even if there is a short lag between them.
  • A new research study investigated the impact of training with feedback on visual-auditory synchrony.
  • Findings show that even if training is restricted to one location, learning happens across the visual field.

Our brain seamlessly integrates visual and auditory information to create a coherent, synchronized perception of our environment. This is accomplished despite large differences in how quickly visual and auditory inputs are processed.

On one hand, visual information travels at the speed of light and arrives at our eyes nearly instantly from the source. Auditory signals travel far more slowly, at around 343 meters per second in air. The resulting delay can be noticeable: you see a lightning flash, then hear the thunder a few seconds later.
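That lag can be estimated directly from the distance to the source. Here is a minimal sketch (not from the article; it assumes the 343 m/s figure for sound in air and treats light as arriving instantly):

```python
# Estimate how long after seeing an event you will hear it.
# Assumption: sound travels ~343 m/s in air; light arrives effectively instantly.
SPEED_OF_SOUND_M_PER_S = 343.0

def audio_lag_seconds(distance_m: float) -> float:
    """Seconds between seeing an event and hearing it, at a given distance."""
    return distance_m / SPEED_OF_SOUND_M_PER_S

# Lightning about 1 km away: thunder arrives roughly 2.9 seconds after the flash.
print(round(audio_lag_seconds(1000), 1))  # 2.9
```

This is the familiar rule of thumb of counting seconds between flash and thunder: roughly three seconds per kilometer.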

On the other hand, the brain processes auditory information more quickly than visual information. This shows up when measuring reaction times to a flash vs. a beep: People tend to be about 20 milliseconds faster to react to a sound than a visual event.

Temporal Binding Window (TBW)

To integrate auditory and visual information in situations where the two signals arrive at slightly different times, the brain incorporates a kind of buffer called the audio-visual temporal binding window (TBW), within which sounds and sights are perceived as synchronized even if their timing differs by a few dozen milliseconds. Research has shown that the TBW can range from 160 ms to 250 ms for simple stimuli like beeps and flashes. That means that even if a visual flash and an auditory beep are offset by 200 ms, the brain can still perceive them as happening at the same time. For more complex stimuli, such as human speech, the TBW is even wider.
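One way to picture the TBW is as a simple tolerance around true simultaneity: any offset smaller than the window is reported as synchronous. This is a deliberately simplified sketch, not the model used in the research (real binding windows are asymmetric and probabilistic, not a hard cutoff):

```python
def perceived_synchronous(offset_ms: float, tbw_ms: float = 200.0) -> bool:
    """Toy model of the temporal binding window (TBW).

    offset_ms: lag between flash and beep in milliseconds.
    tbw_ms: width of the binding window (~160-250 ms for simple stimuli).
    Returns True if the pair would be judged simultaneous.
    """
    return abs(offset_ms) <= tbw_ms

print(perceived_synchronous(150))  # True: a 150 ms lag still feels synchronous
print(perceived_synchronous(400))  # False: a 400 ms lag is noticeably out of sync
```

In this picture, the training studies described below amount to shrinking `tbw_ms`, so that smaller offsets are correctly detected as out of sync.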

Previous research has shown that an individual's TBW can be narrowed through perceptual training with feedback (Powers et al., 2009). In these studies, participants complete many trials in which they judge whether a beep and a flash occurred simultaneously. After sufficient training trials with feedback on whether they were correct, people's TBWs shrink. In other words, they become more accurate at detecting when the beep and flash occur at slightly different times. However, previous research did not test how far this type of training generalizes.

In a new study published in Perception, Patrick Bruns and colleagues asked whether synchronicity training in one part of the visual field would transfer to an untrained part of the visual field. Participants completed two training sessions in which they judged the synchronicity of pairs of audiovisual stimuli (flashes and beeps), but only received training in one hemifield—either the right or left side of the visual field. Before and after training, each participant's TBW was measured in both hemifields. The findings showed remarkable transfer from one hemifield to the other. That is, participants who were trained in the right visual field showed just as much improvement (shrinking) of their TBW in the untrained left visual field as in the trained right visual field.

The results suggest that the neural mechanisms responsible for learning to improve audio-visual synchronization operate at a high level of perceptual processing, not tied to specific retinal locations. When synchronization is trained in one region of the visual field, the learning generalizes across the entire visual field. Open questions remain. For example, does synchronicity training with simple stimuli such as flashes and beeps generalize to other kinds of synchrony judgments, such as human speech or other complex audio-visual events? And how long do the training effects last? In the current study, participants were tested immediately after completing the two-day training, so it remains unknown whether the effects would persist months or years later.

References

Bruns, P., Paumen, T., & Röder, B. (2024). Perceptual training of audiovisual simultaneity judgments generalizes across spatial locations. Perception, 03010066251342010.

Powers, A. R., Hillock, A. R., & Wallace, M. T. (2009). Perceptual training narrows the temporal window of multisensory binding. Journal of Neuroscience, 29(39), 12265-12274.

More from Nicolas Davidenko Ph.D.