
Artificial Intelligence

How AI and Human Brains Are Converging

AI study discovers areas where LLMs are becoming more like human brains.

Source: NickyPe/Pixabay

Is it just your imagination, or is artificial intelligence (AI) becoming more like biological brains? Are newer large language models (LLMs) evolving in a way that resembles how human brains function? Researchers at Columbia University and the Feinstein Institutes for Medical Research at Northwell Health published a new study in Nature Machine Intelligence that compared multiple LLMs with neural recordings of human brain activity and identified areas where the two are converging.

Artificial intelligence machine learning carries inherent algorithmic complexity: the many processing layers of a deep artificial neural network make it effectively impossible to forensically unravel precisely how a model derives its outputs and predictions. Exactly how AI deep neural networks reach their decisions remains a black box.

As LLMs change with each new version release, one thing remains constant: the underlying factors in artificial language processing that contribute to this convergence remain difficult to identify.

“Although previous research has demonstrated similarities between LLM representations and neural responses, the computational principles driving this convergence—especially as LLMs evolve—remain elusive,” wrote first author Gavin Mischler along with co-authors Yinghao Aaron Li, Stephan Bickel, Ashesh Mehta, and Nima Mesgarani.

The team of researchers evaluated 12 similarly sized, open-source, pre-trained LLMs with different linguistic abilities. Specifically, the scientists analyzed models with seven billion parameters (LLaMA, LLaMA2, Falcon, MPT, LeoLM, Mistral, XwinLM), 6.9 billion parameters (Pythia), and 6.7 billion parameters (FairseqDense, OPT, CerebrasGPT, Galactica).

Where does the human brain activity data come from? One of the great challenges in neuroscience is obtaining brain activity data from living humans, for obvious reasons. Thus, when patients who require neurosurgery and brain recordings as part of their treatment consent to participate in neuroscience studies, it presents a rare opportunity for researchers.

For this study, the scientists recorded the brain activity of eight consenting participants who were already undergoing neurosurgery to treat drug-resistant epilepsy. To identify the areas of the brain responsible for the epileptic seizures, special electrode sensors used for intracranial electroencephalography (iEEG) were implanted within the cranium; electrodes of this type are also used for invasive brain-computer interfaces (BCIs). As the study participants listened to recordings of voice actors reading story passages and conversations, their brain activity was recorded by the implanted iEEG electrodes.

“Here we used intracranial electroencephalography recordings from neurosurgical patients listening to speech to investigate the alignment between high-performance LLMs and the language-processing mechanisms of the brain,” wrote the scientists.

To create scoring benchmarks, the AI models were given the same content as the human study participants, along with reading comprehension and commonsense reasoning tasks analogous to the listening comprehension task the eight human participants performed. An overall performance score for each of the 12 LLMs was calculated as the average of its reading comprehension and commonsense reasoning scores.
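As a rough illustration of that aggregation step, here is a minimal sketch in Python. It is not the authors' code; the model names follow the study, but the score values are placeholders rather than the paper's actual results.

```python
# A minimal sketch (not the authors' code) of how an overall score per model
# could be computed as the average of two benchmark scores and then ranked.
# The model names follow the study; the numbers below are placeholders only.

benchmark_scores = {
    # model: (reading_comprehension, commonsense_reasoning) -- placeholder values
    "Mistral": (0.80, 0.76),
    "XwinLM":  (0.78, 0.74),
    "LLaMA2":  (0.75, 0.72),
    # ...remaining models omitted for brevity
}

# Overall score = average of the two task scores.
overall = {
    model: (reading + commonsense) / 2
    for model, (reading, commonsense) in benchmark_scores.items()
}

# Rank models from highest to lowest overall score.
for model in sorted(overall, key=overall.get, reverse=True):
    print(f"{model}: {overall[model]:.3f}")
```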

The team discovered that the LLMs that performed the best showed “a more brain-like hierarchy of layers.” Mistral performed the best, followed, in order, by XwinLM, LLaMA2, LLaMA, Falcon, MPT, LeoLM, FairseqDense, OPT, Pythia, CerebrasGPT, and Galactica.

What sets this study apart from other work comparing the biological brain with AI deep learning is that it evaluates different LLMs that share a single, consistent architecture: the stacked transformer decoder.

The main takeaway from their analysis is that the LLMs demonstrated layer hierarchies that echoed the hierarchy of areas in the brain’s cortex responsible for sound and language processing.
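A common technique in this line of research, sketched here only as a hedged illustration and not as the authors' exact method, is a linear "encoding model": a regression that predicts each electrode's activity from a given LLM layer's representations of the words the participant heard, noting which layer predicts each electrode best. The toy example below uses random stand-in data, and every variable name and dimension is an assumption for illustration.

```python
# A minimal sketch, not the study's pipeline: an "encoding model" predicts each
# electrode's response from a given LLM layer's word embeddings and records
# which layer predicts best. All data, names, and sizes are toy placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

n_words, emb_dim, n_layers, n_electrodes = 400, 128, 8, 6  # toy sizes
rng = np.random.default_rng(0)

# layer_activations[l][w]: embedding of heard word w at LLM layer l (random stand-in)
layer_activations = rng.standard_normal((n_layers, n_words, emb_dim))
# neural[e][w]: electrode e's response aligned to word w (random stand-in)
neural = rng.standard_normal((n_electrodes, n_words))

best_layer = []
for e in range(n_electrodes):
    # Cross-validated fit of each layer's embeddings to this electrode's response.
    layer_fit = [
        cross_val_score(Ridge(alpha=1.0), layer_activations[l], neural[e],
                        cv=5, scoring="r2").mean()
        for l in range(n_layers)
    ]
    best_layer.append(int(np.argmax(layer_fit)))

print(best_layer)  # best-predicting layer per electrode
```

With real data, the embeddings would come from each LLM processing the same stories the participants heard, and the responses would come from the implanted iEEG electrodes; if early layers best predict auditory electrodes while deeper layers best predict higher-order language electrodes, the model's layer hierarchy mirrors the brain's processing hierarchy.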

The researchers attribute the convergence of LLMs and human brains to the hierarchical structure of language, in which smaller components, such as articulatory features, phonemes, and syllables, gradually build up into larger components, such as words, phrases, and sentences.

“These findings reveal converging aspects of language processing in the brain and LLMs, offering new directions for developing models that better align with human cognitive processing,” concluded the scientists.

Copyright © 2024 Cami Rosso. All rights reserved.
