New Research Shows AI Models' Ability to Predict Demographics Can Lead to Fairness Gaps

June 28, 2024
Machine-learning models that have shown a capacity for predicting human demographics also tend to show the largest fairness gaps in their diagnostic predictions, a new study finds.

A new study has shown that the AI models “most accurate at making demographic predictions also show the biggest ‘fairness gaps,’” or “discrepancies in their ability to accurately diagnose images of people of different races or genders.” EurekAlert has the news.
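To make the term concrete, here is a minimal sketch of one way a fairness gap can be quantified: compute a model’s diagnostic accuracy separately for each demographic subgroup and take the spread between the best- and worst-served groups. The labels, predictions, and metric below are illustrative assumptions, not the study’s own code or measurements.

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Per-group diagnostic accuracy and the gap between best and worst group."""
    accuracies = {}
    for g in np.unique(groups):
        mask = groups == g
        accuracies[str(g)] = float(np.mean(y_true[mask] == y_pred[mask]))
    return accuracies, max(accuracies.values()) - min(accuracies.values())

# Hypothetical diagnosis labels, model predictions, and subgroup labels
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

per_group, gap = subgroup_accuracy_gap(y_true, y_pred, groups)
print(per_group)  # {'A': 0.8, 'B': 0.6}
print(gap)        # 0.2 -> a 20-point gap between the two groups
```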

Marzyeh Ghassemi, the senior author of the study, says that the paper both re-demonstrates the capacity machine-learning models have for predicting human demographics and “links that capacity to the lack of performance across different groups.” The researchers also found that they could “retrain the models in a way that improves their fairness,” especially when the models were tested on the same types of patients they were trained on.

Ghassemi and her colleagues demonstrated that these models “are very good at predicting gender and race,” even though they aren’t trained on those tasks. They sought to determine if the models were using “demographic shortcuts to make predictions that ended up being less accurate for some groups. These shortcuts can arise in AI models when they use demographic attributes to determine whether a medical condition is present, instead of relying on other features of the images.”
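As a rough illustration of what such a shortcut looks like, the toy sketch below trains a simple classifier on synthetic tabular data in which a demographic attribute happens to correlate with the disease label. The classifier puts real weight on that attribute, and its accuracy then diverges across groups on a test population where the correlation no longer holds. This is an assumed, simplified stand-in for the paper’s image models, not a reproduction of them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(prev_group0, prev_group1, n=5000):
    """Synthetic patients: group label, a weak clinical signal, and a disease
    label whose base rate depends on the group (the spurious correlation)."""
    group = rng.integers(0, 2, size=n)
    disease = rng.binomial(1, np.where(group == 1, prev_group1, prev_group0))
    signal = disease + rng.normal(0, 1.5, size=n)  # noisy clinical feature
    return np.column_stack([signal, group]), disease, group

# Training population where disease is far more common in group 1
X_tr, y_tr, _ = make_population(prev_group0=0.1, prev_group1=0.6)
# Test population where prevalence is equal, so the shortcut misleads
X_te, y_te, g_te = make_population(prev_group0=0.35, prev_group1=0.35)

model = LogisticRegression().fit(X_tr, y_tr)
print("weight on clinical signal:      ", model.coef_[0, 0])
print("weight on demographic attribute:", model.coef_[0, 1])  # large -> shortcut

for g in (0, 1):
    acc = np.mean(model.predict(X_te[g_te == g]) == y_te[g_te == g])
    print(f"test accuracy for group {g}: {acc:.2f}")
```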

When tested, most of the models displayed so-called “fairness gaps.” They were adept at predicting gender, race, and age, suggesting that they “may be using demographic categorizations as a shortcut to make their disease predictions.” Researchers were able to reduce those fairness gaps, but the reduction was most effective when the models were tested on hospitals’ own patient populations – the patients whose data the models were trained on in the first place.
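That generalization caveat can be pictured with one more toy sketch: a model whose subgroup accuracies look balanced on held-out patients from its own hospital can show a wider gap at an outside hospital whose data are distributed differently (here, a hypothetical measurement shift affecting one group’s scans). All names, numbers, and mechanisms below are assumptions for illustration only, not the study’s models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def simulate_hospital(n, group1_shift=0.0):
    """Synthetic hospital: subgroup label, one clinical feature, disease label.
    `group1_shift` models a site-specific measurement shift for group 1 only."""
    group = rng.integers(0, 2, size=n)
    disease = rng.binomial(1, 0.3, size=n)
    signal = 1.5 * disease + rng.normal(0, 1.0, size=n) + group1_shift * (group == 1)
    return signal.reshape(-1, 1), disease, group

def accuracy_gap(model, X, y, group):
    accs = [np.mean(model.predict(X[group == g]) == y[group == g]) for g in (0, 1)]
    return abs(accs[0] - accs[1])

# Train and validate on the home hospital's patients (same distribution)...
X_tr, y_tr, _ = simulate_hospital(4000)
X_in, y_in, g_in = simulate_hospital(2000)
# ...then test on an external hospital where group 1's scans read differently
X_out, y_out, g_out = simulate_hospital(2000, group1_shift=1.5)

model = LogisticRegression().fit(X_tr, y_tr)
print("fairness gap, home hospital:    ", round(accuracy_gap(model, X_in, y_in, g_in), 2))
print("fairness gap, external hospital:", round(accuracy_gap(model, X_out, y_out, g_out), 2))
```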

About the Author

Matt MacKenzie | Associate Editor

Matt is Associate Editor for Healthcare Purchasing News.