Artificial Intelligence Reveals What the Brain Wants to See

Harvard Medical School uses AI to study the brain’s visual process.

Posted May 03, 2019

Source: cocoparisienne/Pixabay

Children with autism spectrum disorder (ASD) have impaired verbal and nonverbal communication as well as impaired social and cognitive skills. Many autistic children find eye contact difficult and avoid it, and the exact reason why is not known. One theory is that autistic children are indifferent to the nonverbal social cue of eye contact: they do not find it notably significant or meaningful. Testing whether this is the case would require understanding the neural coding of the brain’s visual neurons.

Over half a century ago, Harvard Medical School neurobiologists David H. Hubel and Torsten N. Wiesel published their landmark neuroscience study demonstrating that neurons in the feline brain respond more strongly to some images than to others. Exactly why this happens has eluded neuroscientists. Now scientists have an additional piece of that puzzle, thanks to artificial intelligence (AI) algorithms.

In a study published on May 2, 2019, in Cell, Harvard Medical School (HMS) researchers applied artificial intelligence to study the visual responses of the mammalian brain and demonstrated that visual preferences are not predetermined, but rather learned through consistent exposure over time.

How do you study the visual preferences of neurons without biasing the results by using preselected images? The team of Margaret S. Livingstone, Will Xiao, Gabriel Kreiman, Till S. Hartmann, Peter F. Schade, and Carlos R. Ponce used a two-pronged AI approach that let the neurons of macaque monkeys guide their own stimulus selection, thereby producing synthetic images based on neuronal preference.

The researchers combined a pre-trained generative neural network with a “genetic algorithm” that evolves the AI-produced images based on the responses of real neurons.

The generative adversarial network was trained on over one million images from the ImageNet database in order to learn to model the statistics of natural images. The network had six deconvolutional modules and three fully connected layers. Image codes were passed through the deep generative network to produce synthetic images.
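To make the generator concrete, here is a minimal sketch of how such a network can map a vector “image code” to a picture: fully connected layers expand the code into a spatial feature map, and deconvolutional (transposed-convolution) modules upsample that map into an RGB image. The ToyImageGenerator class, the 4096-dimensional code, and all layer sizes are illustrative assumptions, not the study’s actual pre-trained network.

```python
# Hypothetical sketch of a deconvolutional image generator; layer sizes and
# the code dimension are assumptions chosen for brevity, not the network
# used in the study (which was pre-trained on ImageNet).
import torch
import torch.nn as nn

class ToyImageGenerator(nn.Module):
    def __init__(self, code_dim: int = 4096):
        super().__init__()
        # Fully connected layers expand the image code into a feature map.
        self.fc = nn.Sequential(
            nn.Linear(code_dim, 4096), nn.ReLU(),
            nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, 256 * 8 * 8), nn.ReLU(),
        )
        # Deconvolutional modules upsample 8x8 features to a 128x128 image.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        x = self.fc(code).view(-1, 256, 8, 8)
        return self.deconv(x)  # RGB images with pixel values in [-1, 1]

generator = ToyImageGenerator()
codes = torch.randn(10, 4096)   # a population of random image codes
images = generator(codes)       # shape: (10, 3, 128, 128)
```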

The team wrote, “We reasoned that this would be an efficient space in which to perform the genetic algorithm, because the brain also learns from real-world images, so its preferred images are also likely to follow natural image statistics.”

The team recorded the responses of neurons in six monkeys to the synthetic images and used those responses to score the image codes. The highest-scoring image codes were then put through a process of selection, recombination, and mutation to create new image codes.
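In code, one generation of that loop might look like the sketch below: rank the codes by the firing they evoked, keep the best as parents, and breed children by recombining and mutating them. The function name, population size, and mutation parameters are hypothetical, and the neuronal firing rates here are simulated with random numbers rather than recorded from real neurons.

```python
# Hypothetical sketch of one generation of the genetic algorithm over image
# codes; all parameters are illustrative, and firing rates are simulated.
import numpy as np

rng = np.random.default_rng(0)

def evolve_one_generation(codes, firing_rates, n_keep=10,
                          mutation_rate=0.25, mutation_scale=0.5):
    """codes: (pop_size, code_dim); firing_rates: (pop_size,) scores."""
    pop_size, code_dim = codes.shape
    # Selection: rank codes by the neuronal response they evoked.
    order = np.argsort(firing_rates)[::-1]
    parents = codes[order[:n_keep]]
    children = [parents[0].copy()]  # elitism: carry over the best code
    while len(children) < pop_size:
        # Recombination: each element comes from one of two random parents.
        pa, pb = parents[rng.integers(n_keep, size=2)]
        child = np.where(rng.random(code_dim) < 0.5, pa, pb)
        # Mutation: perturb a random subset of elements with Gaussian noise.
        mutate = rng.random(code_dim) < mutation_rate
        child = child + mutate * rng.normal(0.0, mutation_scale, code_dim)
        children.append(child)
    return np.stack(children)

codes = rng.normal(size=(40, 4096))   # current population of image codes
firing_rates = rng.random(40)         # stand-in for recorded responses
next_codes = evolve_one_generation(codes, firing_rates)
```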

The researchers named their new algorithmic approach “XDREAM,” which stands for “EXtending DeepDream with Real-time Evolution for Activity Maximization in real neurons.”

The team wrote, “During almost all the evolutions, the synthetic images evolved gradually to become increasingly effective stimuli.”

Used together, the generative deep neural network and the genetic algorithm produced images driven by neuronal firing and refined by evolution. The evolved images triggered maximal firing in the visual cortex of the monkeys, and the evolved synthetic images activated the neurons more strongly than natural images did. The researchers also found that a new image’s similarity to the evolved images can serve as a predictor of the neuronal response to it.
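As a rough illustration of that last finding, one could score a new image by how similar its features are to the features of a neuron’s evolved image and treat that score as the predicted response. The feature vectors and the cosine-similarity measure below are assumptions made for illustration, not the study’s actual analysis.

```python
# Hypothetical sketch: predict relative neuronal responses to new images from
# their similarity to the neuron's evolved image (feature vectors simulated).
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_relative_response(evolved_features, new_image_features):
    """Score each new image by its similarity to the evolved image."""
    return [cosine_similarity(evolved_features, f) for f in new_image_features]

rng = np.random.default_rng(1)
evolved = rng.normal(size=256)     # features of the neuron's evolved image
novel = rng.normal(size=(5, 256))  # features of five new test images
scores = predict_relative_response(evolved, novel)
print(scores)  # higher similarity -> stronger predicted firing
```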

The team confirmed that neuronal responses are learned over time through constant exposure to images; preference is not predetermined. Margaret Livingstone, HMS professor and senior investigator of the study, said in a Harvard Medical School report, “This malfunction in the visual processing apparatus of the brain can interfere with a child’s ability to connect, communicate and interpret basic cues. By studying those cells that respond preferentially to faces, for example, we could uncover clues to how social development takes place and what might sometimes go awry.”

The hope is that a better understanding of the brain’s visual processing system will eventually help children with cognitive impairments ranging from learning disabilities to ASD.

Copyright © 2019 Cami Rosso All rights reserved.

References

Ponce, Carlos R., Will Xiao, Peter F. Schade, Till S. Hartmann, Gabriel Kreiman, and Margaret S. Livingstone. “Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences.” Cell. May 2, 2019.

Brownlee, Christy. “Easy on the Eyes.” Harvard Medical School News and Research. May 2, 2019.

Emory Health Sciences. “Toddlers with autism don't avoid eye contact, but do miss its significance.” ScienceDaily. November 18, 2016.