Artificial Brain Neurons May Advance AI and Neuroscience
Scientists use AI deep learning to learn how neurons in the brain work.
Posted September 14, 2021 | Reviewed by Kaja Perina
A new study published last week in Neuron by researchers at The Hebrew University of Jerusalem may accelerate innovation in both artificial intelligence (AI) deep learning and neuroscience by providing high-precision insights into how single neurons compute.
“This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power,” the researchers wrote.
A central question in neuroscience is how a neuron’s structure, function, and synaptic inputs relate to its spiking output. In recent decades, significant progress has been made in understanding how low-level mechanisms, such as ion channels and synaptic transmission, interact to support a neuron’s computation.
Yet this understanding lacks precision because of the combinatorial complexity of analyzing how thousands of synaptic inputs are transformed into spike outputs. Typically, models capture only an average firing rate rather than spike timing at millisecond resolution. To address this challenge, the researchers turned to artificial intelligence.
“Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons’ input/output (I/O) mapping complexity,” wrote the researchers.
The scientists trained AI deep neural networks (DNNs) to mimic the input and output functionality of a variety of biophysical models of cortical neurons at a high-precision resolution of milliseconds. This approach allowed the researchers to model cortical neurons in their full range of complexity.
The scientists developed a temporally convolutional deep neural network with five to eight layers to model the input and output mapping of a specific type of neuron, the layer 5 cortical pyramidal cell (L5PC). In neuroanatomy, pyramidal cells, or pyramidal neurons, are large, common neurons found in the hippocampus, amygdala, and cerebral cortex. A pyramidal neuron has a triangle-shaped cell body (soma), a single apical dendrite, a single axon, and multiple basal dendrites. The main role of pyramidal neurons is to transform synaptic inputs into action potential outputs. In the motor cortex, L5PCs send their axons down the spinal cord to drive muscles. Pyramidal neurons are building blocks for higher-level brain functions such as consciousness and memory.
“A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC),” the researchers wrote. “This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model.”
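To make the idea concrete, here is a minimal sketch of a temporally convolutional network of the general kind described above: a stack of causal 1-D convolutions over time that maps many presynaptic spike trains to a per-millisecond spike probability. All sizes, weights, and spike rates here are illustrative assumptions for the sketch, not the study's actual architecture or trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only; the study's actual model used far more
# synaptic inputs and wider layers than this toy sketch.
n_synapses = 200   # presynaptic input channels
n_ms = 200         # simulation length in milliseconds
kernel_ms = 35     # temporal extent of each convolutional filter
n_filters = 32     # filters per hidden layer
n_layers = 5       # within the paper's reported five-to-eight range

# Random presynaptic spike trains: one row per synapse, one column per ms.
x = (rng.random((n_synapses, n_ms)) < 0.01).astype(float)

def temporal_conv(x, w, b):
    """Causal 1-D convolution over time: output at time t sees only t and earlier."""
    n_out, n_in, k = w.shape
    t = x.shape[1]
    xp = np.pad(x, ((0, 0), (k - 1, 0)))  # left-pad so the filter is causal
    out = np.empty((n_out, t))
    for i in range(t):
        window = xp[:, i:i + k]  # (n_in, k) slice of history ending at time i
        out[:, i] = np.tensordot(w, window, axes=([1, 2], [0, 1])) + b
    return out

# Stack of temporal convolutions with ReLU nonlinearities (untrained,
# random weights here; in the study these were fit to a biophysical model).
h, n_in = x, n_synapses
for _ in range(n_layers):
    w = rng.normal(0.0, 0.01, size=(n_filters, n_in, kernel_ms))
    h = np.maximum(temporal_conv(h, w, np.zeros(n_filters)), 0.0)
    n_in = n_filters

# Readout: one predicted spike probability per millisecond for the neuron.
w_out = rng.normal(0.0, 0.01, size=(1, n_in, kernel_ms))
p_spike = 1.0 / (1.0 + np.exp(-temporal_conv(h, w_out, np.zeros(1))))

print(p_spike.shape)  # (1, 200): a spike probability for each millisecond
```

The causal padding is the key design choice: the network's prediction at each millisecond depends only on past and present synaptic input, mirroring how a real neuron's membrane integrates its inputs over time.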
The deep neural network’s filters gave the researchers new insight into the dendritic processing that shapes a neuron’s input and output properties.
With this new proof of concept, scientists now have a unified approach for characterizing the computational complexity of any type of neuron, an innovative method that may accelerate breakthroughs in both AI and neuroscience.
Copyright © 2021 Cami Rosso All rights reserved.