New Model to Reduce AI Bias in Life Sciences and Biomedicine

Researchers create an AI framework to debias machine learning in biology.

Posted Mar 03, 2021 | Reviewed by Kaja Perina

Source: TheDigitalArtist/Pixabay

In fields such as biotechnology, medicine, pharmaceuticals, health care, and the life sciences, ensuring human health and safety is of the highest priority when deploying artificial intelligence (AI) machine learning. Researchers at the Broad Institute of MIT and Harvard and their collaborators created a framework to audit and debias AI machine learning in the life sciences and published their recent study in Communications Biology.

“Biases in data used to train machine learning (ML) models can inflate their prediction performance and confound our understanding of how and what they learn,” wrote Broad Institute of MIT and Harvard researchers Fatma-Elzahraa Eid, Haitham Elmarakeby, Yujia Alina Chan, Nadine Fornelos, Eliezer Van Allen, and Kasper Lage, along with Mahmoud ElHefnawi at the National Research Centre in Giza, Egypt, and Lenwood Heath at Virginia Polytechnic Institute and State University. “Although biases are common in biological data, systematic auditing of ML models to identify and eliminate these biases is not a common practice when applying ML in the life sciences.”

The research team first developed a debiasing framework for protein-protein interaction (PPI) prediction, then applied it to drug-target bioactivity and MHC-peptide binding. Protein-protein interactions are critical to the cellular functions of organisms, and predicting them is important for bioengineering and de novo drug discovery. In medicine, drug-target bioactivity refers to the effect a drug has on a living tissue or organism. The major histocompatibility complex (MHC) is a group of genes found in vertebrates that codes for proteins on the surfaces of cells, enabling the immune system to identify foreign matter.

“To illustrate the broad applicability of our auditing framework in general and the applicability of the developed auditors to other paired-input applications, we adapted the auditing framework to two additional applications of important therapeutic interest: predictions of drug-target bioactivity and MHC-peptide binding,” the researchers wrote.

The machine learning auditing framework has four modules: benchmarking, bias interrogation, bias identification, and bias elimination.

For the first module, the researchers established baseline performance by benchmarking classifiers on separate datasets. Of the seven classifiers, five used support vector machines (SVMs) with different kernels, one used a random forest, and one used a deep-learning-based stacked autoencoder. The support vector machine classifiers were implemented in MATLAB with the LIBSVM library. Three databases of human proteins were used, and the classifiers were trained on protein-pair subsets drawn from each dataset. The researchers reported that the “best benchmarking performance across all classifiers was high” as measured by the average area under the curve (AUC).
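The benchmarking step can be illustrated with a minimal sketch. The example below uses Python with scikit-learn rather than the MATLAB/LIBSVM setup described in the study, and the feature matrix, labels, and classifier settings are placeholders, not the authors' actual configuration.

```python
# Minimal benchmarking sketch (illustrative only; the study used MATLAB/LIBSVM).
# X and y stand in for featurized protein pairs and their interaction labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))        # placeholder pair features
y = rng.integers(0, 2, size=500)      # placeholder interaction labels

classifiers = {
    "svm_linear": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "svm_rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "svm_poly": make_pipeline(StandardScaler(), SVC(kernel="poly")),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Benchmark each classifier with cross-validated AUC, as in the first module.
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```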

“Robust biological ML models should generalize to independent datasets,” the researchers wrote.

In artificial intelligence machine learning, generalization refers to an algorithm's ability to apply what it learned during training, with a high degree of accuracy, to new data it has not seen before. Robustness in this sense refers to a machine learning model's ability to perform well on novel input data.

To achieve this, the team created a Generalizability Auditor as the second module. This module compares a model's benchmark performance to its performance on an independent dataset, called the generalization dataset, in an effort to detect potential bias.
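As a rough illustration of the idea, and not the authors' exact implementation, the check can be expressed as a comparison of two AUC scores; the gap threshold below is an assumption for this sketch rather than a value from the study.

```python
# Illustrative generalizability check: compare a classifier's cross-validated
# benchmark AUC with its AUC on an independent generalization dataset.
# A large gap flags possible bias in the training data; the 0.10 threshold
# is an assumption for this sketch.
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

def audit_generalizability(clf, X_train, y_train, X_indep, y_indep, gap=0.10):
    benchmark_auc = cross_val_score(clf, X_train, y_train,
                                    cv=5, scoring="roc_auc").mean()
    clf.fit(X_train, y_train)
    if hasattr(clf, "decision_function"):
        scores = clf.decision_function(X_indep)
    else:
        scores = clf.predict_proba(X_indep)[:, 1]
    independent_auc = roc_auc_score(y_indep, scores)
    flagged = (benchmark_auc - independent_auc) > gap
    return benchmark_auc, independent_auc, flagged
```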

The detected bias signals, along with hypotheses about their sources, are input to the third module, which identifies the bias by either confirming or rejecting each formulated hypothesis.
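One simple way to confirm or reject a hypothesis of this kind is sketched below, under the assumption that the suspected bias is how often individual proteins appear among positive training pairs: train a classifier on that bias feature alone and check whether it performs nearly as well as the full model. The function names, feature choice, and tolerance are hypothetical, not taken from the paper.

```python
# Illustrative bias-hypothesis test: train a simple classifier using only the
# hypothesized bias feature (per-protein occurrence counts among positive
# pairs) and compare its AUC with the full model's AUC. For brevity the counts
# are computed on the whole set; a careful audit would compute them per fold.
from collections import Counter
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def occurrence_features(pairs, positive_pairs):
    counts = Counter(p for pair in positive_pairs for p in pair)
    return np.array([[counts[a], counts[b]] for a, b in pairs])

def bias_hypothesis_supported(pairs, labels, full_model_auc, tolerance=0.05):
    positives = [pair for pair, y in zip(pairs, labels) if y == 1]
    X_bias = occurrence_features(pairs, positives)
    bias_auc = cross_val_score(LogisticRegression(), X_bias, np.array(labels),
                               cv=5, scoring="roc_auc").mean()
    # Hypothesis supported if the bias feature alone nearly matches the full model.
    return bias_auc >= full_model_auc - tolerance
```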

The final module eliminates the bias. It removes the bias identified in the prior step and then reassesses how well the classifiers generalize to separate datasets.
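As a rough sketch of what such a check might look like, again assuming the identified bias is per-protein occurrence, one could rebuild the evaluation so that test pairs share no proteins with the training pairs, retrain, and re-measure generalization. The function below and its 70/30 split are assumptions for illustration, not the authors' procedure.

```python
# Illustrative elimination check: construct a stricter split in which test
# pairs contain only proteins never seen in training, so per-protein
# occurrence counts cannot help. Performance that holds up on this split
# suggests the model relies on real signal rather than the bias.
def strict_unseen_split(pairs, labels, train_fraction=0.7):
    n_train = int(train_fraction * len(pairs))
    train_pairs, train_labels = pairs[:n_train], labels[:n_train]
    seen = {protein for pair in train_pairs for protein in pair}
    held_out = [(pair, label) for pair, label in zip(pairs[n_train:], labels[n_train:])
                if pair[0] not in seen and pair[1] not in seen]
    test_pairs = [pair for pair, _ in held_out]
    test_labels = [label for _, label in held_out]
    return train_pairs, train_labels, test_pairs, test_labels
```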

“When there is insufficient signal in the training data representation, ML models could learn primarily from representational biases in the training data,” the researchers discovered. “This appears to predominantly influence paired-input ML applications and can be misleading if not illuminated through auditing.”

The researchers recommend that machine learning scientists who are using AI for biological purposes develop a “community-wide stance on the systematic auditing of ML models for biases,” and they have provided code, resources, and methods in a GitHub repository. With this proof-of-concept, the researchers have provided a way to perform machine learning that predicts biological relationships with reduced bias, greater accuracy, and better outcomes.

Copyright © 2021 Cami Rosso All rights reserved.