AI bias and fairness in diabetes co-morbidity detection
AI systems depend heavily on high-quality training data; without it, they can be biased. In healthcare, this can mean worse AI performance for some groups of patients, exacerbating existing inequalities. This talk presents a case study of an AI-based Type II diabetes co-morbidity predictor whose available training data under-represented patients from more deprived backgrounds. We measured the AI's performance across patient groups and found higher false negative and false positive rates for more deprived patients. The common and accepted practice of training on synthetic data made the bias appear worse. We discuss the implications of this for clinical use and deployment.
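The per-group error analysis described above can be reproduced with a few lines of pandas. The sketch below is a minimal illustration, assuming a hypothetical data frame with `y_true`, `y_pred`, and `deprivation_quintile` columns; these names are illustrative and are not the study's actual schema:

```python
# Minimal sketch of per-group error-rate measurement. Column names
# (y_true, y_pred, deprivation_quintile) are illustrative assumptions.
import pandas as pd

def group_error_rates(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute false positive and false negative rates per patient group."""
    rows = []
    for group, g in df.groupby(group_col):
        negatives = g[g["y_true"] == 0]
        positives = g[g["y_true"] == 1]
        # FPR: share of true negatives predicted positive;
        # FNR: share of true positives predicted negative.
        fpr = (negatives["y_pred"] == 1).mean() if len(negatives) else float("nan")
        fnr = (positives["y_pred"] == 0).mean() if len(positives) else float("nan")
        rows.append({group_col: group, "n": len(g), "fpr": fpr, "fnr": fnr})
    return pd.DataFrame(rows)

# Usage: compare error rates across deprivation quintiles.
# df = pd.DataFrame({"y_true": [...], "y_pred": [...], "deprivation_quintile": [...]})
# print(group_error_rates(df, "deprivation_quintile"))
```

Comparing the `fpr` and `fnr` columns across groups makes disparities of the kind reported here immediately visible.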
- The training data set for an AI-based Type II diabetes co-morbidity predictor had more missing data for more deprived patients.
- Tests measuring false positive and false negative rates in different groups showed worse AI performance for the more deprived patients.
- Using synthetic training data made the bias worse (see the sketch after this list).
- This has implications for clinicians who use the AI predictor as part of their decision-making process.
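The talk does not specify which synthetic-data method was used, but the interaction between differential missingness and naive augmentation is easy to demonstrate. In the toy sketch below, all numbers and the complete-case fitting strategy are illustrative assumptions: a generator fitted only to complete records inherits, and then amplifies, the under-representation of patients whose records have more missing values:

```python
# Hypothetical illustration: when deprived patients have more missing data,
# a synthetic-data generator fitted to complete cases under-represents them
# even further. All parameters here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy cohort: 80% less deprived (group 0), 20% more deprived (group 1).
n = 1000
deprived = rng.random(n) < 0.2
feature = np.where(deprived, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

# Deprived patients have far more missing values, as in the talk's data set.
missing = rng.random(n) < np.where(deprived, 0.6, 0.05)
complete = ~missing

# Naive generator: fit a single Gaussian to complete cases and sample from it.
# Deprived patients are largely excluded from the fit, so the synthetic rows
# drift toward the well-represented group.
fit = feature[complete]
synthetic = rng.normal(fit.mean(), fit.std(), size=n)

print("deprived share, original cohort:   ", deprived.mean())
print("deprived share among complete rows:", deprived[complete].mean())
```

Running this shows the deprived share dropping from roughly 20% in the cohort to under 10% among the complete rows the generator is fitted to, so augmenting with such synthetic data skews the training distribution further from the already under-represented group.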