Academic Journal

Drop the shortcuts: image augmentation improves fairness and decreases AI detection of race and other demographics from medical images.

Bibliographic Details
Title: Drop the shortcuts: image augmentation improves fairness and decreases AI detection of race and other demographics from medical images.
Authors: Wang R; Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan., Kuo PC; Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan. Electronic address: kuopc@cs.nthu.edu.tw., Chen LC; Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan., Seastedt KP; Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA; Department of Thoracic Surgery, Roswell Park Comprehensive Cancer Center, Buffalo, NY, USA., Gichoya JW; Department of Radiology, Emory University, Atlanta, GA, USA., Celi LA; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA; Division of Pulmonary Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA.
Source: EBioMedicine [EBioMedicine] 2024 Apr; Vol. 102, pp. 105047. Date of Electronic Publication: 2024 Mar 11.
Publication Type: Journal Article
Language: English
Journal Info: Publisher: Elsevier B.V. Country of Publication: Netherlands NLM ID: 101647039 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 2352-3964 (Electronic) Linking ISSN: 23523964 NLM ISO Abbreviation: EBioMedicine Subsets: MEDLINE
Imprint Name(s): Original Publication: [Amsterdam] : Elsevier B.V., [2014]-
MeSH Terms: Benchmarking*; Learning*; Aged; Female; Humans; Middle Aged; Black People; Brain; Demography; United States; Asian People; White People; Male; Black or African American
Abstract: Background: It has been shown that AI models can learn race from medical images, leading to algorithmic bias. Our aim in this study was to enhance the fairness of medical image models by eliminating bias related to race, age, and sex. We hypothesised that models may be learning demographics via shortcut learning and combatted this using image augmentation.
Methods: This study included 44,953 patients who identified as Asian, Black, or White (mean age 60.68 years ±18.21; 23,499 women), for a total of 194,359 chest X-rays (CXRs) from the MIMIC-CXR database. For external validation, the included CheXpert images comprised 45,095 patients (mean age 63.10 years ±18.14; 20,437 women), for a total of 134,300 CXRs. We also collected 1195 3D brain magnetic resonance imaging (MRI) scans from the ADNI database, covering 273 participants (mean age 76.97 years ±14.22; 142 women). Deep learning (DL) models were trained on either non-augmented or augmented images and assessed using disparity metrics. The features learned by the models were analysed using task transfer experiments and model visualisation techniques.
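The record does not specify which augmentations the authors applied, so the following is only a minimal sketch of what a CXR augmentation-vs-baseline preprocessing pair might look like in torchvision; the transform choices and parameters are illustrative assumptions, not the paper's published configuration.

```python
# Hypothetical CXR preprocessing pipelines (illustrative only; the specific
# transforms and parameters are assumptions, not the authors' exact scheme).
from torchvision import transforms

augmented = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),           # CXRs are single-channel
    transforms.RandomRotation(degrees=10),                 # small random rotations
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # random zoom/crop
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # intensity perturbation
    transforms.ToTensor(),
])

non_augmented = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((224, 224)),                         # deterministic resize only
    transforms.ToTensor(),
])
```

The intuition, per the abstract, is that random geometric and intensity perturbations disrupt low-level cues a model could exploit as demographic shortcuts while preserving the clinical content of the image.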
Findings: In the detection of radiological findings, training a model on augmented CXR images reduced disparities in error rate among racial groups (-5.45%), age groups (-13.94%), and sexes (-22.22%). For Alzheimer's disease (AD) detection, the model trained with augmented MRI images showed 53.11% and 31.01% reductions in error-rate disparity among age and sex groups, respectively. Image augmentation reduced the model's ability to identify demographic attributes, and the model trained for clinical purposes incorporated fewer demographic features.
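As a hedged illustration of one plausible way to compute the disparity in error rate reported above, the sketch below takes the gap between the highest and lowest per-group error rates; the function name and the max-minus-min definition are assumptions, since the record does not state how the authors formalised disparity.

```python
# Hypothetical disparity-in-error-rate metric (the max-minus-min gap across
# demographic groups is an assumed definition; the paper may differ).
import numpy as np

def error_rate_disparity(y_true, y_pred, groups):
    """Return (gap between highest and lowest per-group error rates, per-group rates)."""
    rates = {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }
    return max(rates.values()) - min(rates.values()), rates

# Toy usage with three racial groups
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 1])
groups = np.array(["Asian", "Black", "White", "Asian", "Black", "White", "Asian", "Black"])
gap, per_group = error_rate_disparity(y_true, y_pred, groups)
print(f"disparity = {gap:.2%}, per-group error rates = {per_group}")
```

A model is fairer under this reading when the per-group error rates converge, i.e., when the gap shrinks after training on augmented images.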
Interpretation: The model trained on augmented images was less likely to be influenced by demographic information when detecting image labels. These results demonstrate that the proposed augmentation scheme can enhance the fairness of DL model interpretations when dealing with data from patients of different demographic backgrounds.
Funding: National Science and Technology Council (Taiwan), National Institutes of Health.
Competing Interests: Declaration of interests L.A.C. reports consulting fees from Philips; payment or honoraria for lectures, presentations, and speakers bureaus from Stanford University, University of California San Francisco, University of Toronto (visiting professor), and Taipei Medical University; support for attending meetings from the Australia New Zealand College of Intensive Care Medicine, University of Bergen, University Medical Center Amsterdam, Académie Nationale de Médecine, and the Doris Duke Foundation; leadership or fiduciary roles, paid or unpaid, in boards, societies, committees, or advocacy groups at PLOS Digital Health and Lancet Digital Health; and cloud compute credits from Oracle. R.W., P.C.K., L.C.C., and K.P.S. have nothing to declare.
(Copyright © 2024 The Authors. Published by Elsevier B.V. All rights reserved.)
Contributed Indexing: Keywords: Augmentation; Bias mitigation; Deep learning; Fairness; Shortcuts
Entry Date(s): Date Created: 20240312 Date Completed: 20240415 Latest Revision: 20240712
Update Code: 20240712
PMCID: PMC10945176
DOI: 10.1016/j.ebiom.2024.105047
PMID: 38471396
Database: MEDLINE