AI algorithms trained to analyze medical images were found to predict patients' self-reported race with high accuracy, raising concerns that healthcare AI systems could encode racial bias and lead to discriminatory treatment.
Researchers from more than 20 institutions, including Emory University, MIT, and Stanford, conducted a study testing AI algorithms on five types of medical imagery, including chest X-rays, hand X-rays, and mammograms, from patients who identified as Black, white, or Asian. Algorithms trained to predict patient race from these medical scans achieved accuracy rates of 80-99% across scan types, with most algorithms correctly identifying Black patients more than 90% of the time. The study has not yet been peer reviewed, but the results and code were posted online. The researchers could not determine what visual cues the algorithms used to make these predictions, even when they degraded or blurred the images. The concern is that such algorithms, when used for medical diagnosis, could learn inappropriate associations between race and medical conditions from historically unequal healthcare data, potentially leading to biased diagnoses or treatment recommendations.

The study also examined a separate healthcare algorithm used by Optum, which was found to systematically underestimate the health needs of Black patients because it used healthcare costs as a proxy for health needs, effectively reducing the proportion of Black patients flagged for extra care from 50% to less than 20%.
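To illustrate the cost-as-proxy failure mode described above, the following is a minimal simulation sketch, not the Optum model: the group labels, cost gap, sample sizes, and selection threshold are illustrative assumptions. It shows how ranking patients by a cost proxy rather than by underlying need can under-select a group that incurs lower costs at the same level of need.

```python
import random

random.seed(0)

# Illustrative assumption: two groups with the same distribution of underlying
# health need, but group B incurs ~40% lower healthcare costs at equal need
# (e.g., due to unequal access to care). All numbers are made up for the sketch.
def simulate_patient(group):
    need = random.gauss(50, 15)                # latent health need (arbitrary units)
    access = 1.0 if group == "A" else 0.6      # assumed cost gap at equal need
    cost = max(0.0, need * access + random.gauss(0, 5))
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient("A") for _ in range(5000)] + \
           [simulate_patient("B") for _ in range(5000)]

def share_of_group_b(selected):
    return sum(p["group"] == "B" for p in selected) / len(selected)

k = 1000  # number of patients flagged for an extra-care program

# Select on the proxy (cost) vs. the true target (need).
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:k]
by_need = sorted(patients, key=lambda p: p["need"], reverse=True)[:k]

print(f"Group B share when ranking by cost (proxy):  {share_of_group_b(by_cost):.0%}")
print(f"Group B share when ranking by need (target): {share_of_group_b(by_need):.0%}")
```

Under these assumed parameters, ranking by the cost proxy flags far fewer group-B patients than ranking by need would, qualitatively mirroring the disparity described above.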
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups.
AI system: Due to a decision or action made by an AI system
Unintentional: Due to an unexpected outcome from pursuing a goal
Post-deployment: Occurring after the AI model has been trained and deployed