
New Research Shows How to Avoid Bias in AI Brain Models

New Yale-led study shows how to prevent AI bias in neuroscience.

Source: Geralt/Pixabay

Artificial intelligence (AI) machine learning is rapidly emerging as a brain-modeling tool for mental health research, psychiatry, neuroscience, genomics, pharmaceuticals, life sciences, and biotechnology. In a new peer-reviewed study, scientists identify potential weak spots in AI brain models and offer solutions for preventing bias.

The research team, led by Abigail Greene at Yale School of Medicine, with co-authors affiliated with Yale University, Brigham and Women’s Hospital, Harvard Medical School, the University of Washington, and the Department of Psychiatry at Columbia University Irving Medical Center, points out the need to identify why AI brain models do not work for everyone before brain–phenotype relationships can be understood without bias.

“Individual differences in brain functional organization track a range of traits, symptoms and behaviors,” wrote the scientists. “So far, work modelling linear brain–phenotype relationships has assumed that a single such relationship generalizes across all individuals, but models do not work equally well in all participants.”

They used predictive AI models, trained and validated on independent data, to relate brain activity to phenotype. In genomics, height, eye color, and hair color are examples of phenotypes.

A phenotype is how DNA physically manifests: the observable traits of an organism, which result from the combination of alleles it possesses for a given gene together with its environment. An allele is a variant of a gene formed by mutation.

To figure out for whom the AI models fail, the team trained AI models to classify neurocognitive test performance from brain activity data. The study used three datasets: the Human Connectome Project, the UCLA Consortium for Neuropsychiatric Phenomics, and data collected at Yale between February 2018 and March 2021. The Yale dataset consists of participants who completed an MRI scan followed by a neuropsychological and self-report battery.
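The general approach, classifying high versus low scorers from brain features and then inspecting which participants are misclassified, can be sketched as follows. This is a minimal illustration with simulated data, not the study's actual pipeline; the feature counts, signal structure, and model choice here are assumptions for demonstration only.

```python
# Hypothetical sketch (not the study's pipeline): classify high vs. low
# neurocognitive scorers from brain features, then ask *who* the model fails on.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_features = 200, 50  # e.g., functional-connectivity features
X = rng.standard_normal((n_subjects, n_features))
# Simulated phenotype: high (1) vs. low (0) test performance,
# weakly driven by the first few brain features plus noise
y = (X[:, :5].sum(axis=1) + rng.standard_normal(n_subjects) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
# Out-of-sample predictions via cross-validation (train/test independence)
y_pred = cross_val_predict(model, X, y, cv=5)

accuracy = (y_pred == y).mean()
# The study's key move: don't stop at overall accuracy --
# identify the specific participants the model misclassifies
misclassified = np.flatnonzero(y_pred != y)
print(f"accuracy: {accuracy:.2f}, misclassified participants: {misclassified[:10]}")
```

Examining the misclassified group as a group, rather than treating errors as random noise, is what lets one test whether model failure is systematic.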

“Across a range of data-processing and analytical approaches applied to three independent datasets, we found that model failure is systematic, reliable, phenotype specific and generalizable across datasets, and that the scores of individuals are poorly classified when they ‘surprise’ the model, performing in a way that is inconsistent with the consensus covariate profile of high and low scorers,” the researchers reported.

This study suggests that AI brain models capture neurocognitive constructs entangled with sociodemographic and clinical factors, yielding a stereotypical profile rather than one that generalizes well to the broader population. The researchers recommend collecting extensive and inclusive demographic data.
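One practical consequence of collecting richer demographic data is that model accuracy can be broken down by subgroup instead of reported as a single number. The sketch below illustrates the idea with simulated labels and predictions; the group names, sample sizes, and error rates are invented for demonstration.

```python
# Hypothetical sketch: stratify out-of-sample accuracy by a demographic
# variable to surface systematic model failure in an underrepresented group.
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Imbalanced sample: group "A" dominates, group "B" is underrepresented
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
y_true = rng.integers(0, 2, size=n)
# Simulate predictions that are reliable for the majority group only:
# group A is always correct; group B is correct about half the time
y_pred = np.where((group == "A") | (rng.random(n) < 0.5), y_true, 1 - y_true)

accs = {}
for g in ["A", "B"]:
    mask = group == g
    accs[g] = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: n={mask.sum()}, accuracy={accs[g]:.2f}")
```

A single pooled accuracy would look strong here while hiding near-chance performance in the smaller group, which is exactly the kind of bias the researchers urge characterizing.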

“That models pick up on and use stereotypical profiles is not always, in itself, a problem for data-driven studies of brain–phenotype relationships,” the researchers wrote.

They urge characterizing these profiles in order to spot biases and to assess how well a model generalizes to different population samples.

“Our results suggest that brain activity-based models are often predicting complex profiles rather than unitary cognitive processes, highlighting the need to consider these profiles and the influence of sample representation on them,” the researchers wrote.

Copyright © 2022 Cami Rosso All rights reserved.
