Addressing Bias and Fairness in Medical Imaging AI
- Machine learning models in medical imaging can perpetuate and even amplify biases present in their training data, leading to unfair and potentially harmful clinical outcomes.
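One common way such bias surfaces is a performance gap between patient subgroups. A minimal sketch of a subgroup audit (function name and inputs are illustrative, not from the source):

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Report per-group accuracy and the largest gap between groups.

    A large gap can indicate that biases in the training data
    (e.g. under-representation of a subgroup) have propagated
    into the model's predictions.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((y_true[mask] == y_pred[mask]).mean())
    gap = max(accs.values()) - min(accs.values())
    return accs, gap
```

In practice the same audit would be run with clinically relevant metrics (sensitivity, specificity, AUC) rather than raw accuracy.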
- Causal reasoning and causal models provide useful tools for analyzing distribution shift and for constructing models that are invariant, or transportable, across domains.
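One concrete instance of this idea is covariate shift: if the causal mechanism P(Y | X) is assumed invariant across domains and only the covariate distribution changes, importance weights can transport a model from source to target. A small sketch for a discrete covariate (function name and density-ratio estimator are assumptions, not from the source):

```python
import numpy as np

def covariate_shift_weights(x_source, x_target):
    """Estimate importance weights w(x) = p_target(x) / p_source(x)
    for a discrete covariate, so source-domain examples can be
    reweighted to reflect the target distribution.

    Assumes the mechanism P(Y | X) is invariant across domains,
    i.e. only the covariate distribution shifts.
    """
    x_source = np.asarray(x_source)
    x_target = np.asarray(x_target)
    vals = np.union1d(x_source, x_target)
    p_s = {v: float((x_source == v).mean()) for v in vals}
    p_t = {v: float((x_target == v).mean()) for v in vals}
    # Density ratio per source example; zero where the source never sees x.
    return np.array([p_t[v] / p_s[v] if p_s[v] > 0 else 0.0
                     for v in x_source])
```

For continuous covariates such as image features, the density ratio would be estimated with a classifier or kernel method instead of counting.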
- Various techniques exist to mitigate bias in datasets and models, but open questions remain about inherent trade-offs with accuracy.
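One standard pre-processing technique in this family is reweighing in the style of Kamiran and Calders, which weights each (group, label) cell so that group membership and label become statistically independent in the weighted data. A minimal sketch (this specific method is my example, not one named in the source):

```python
import numpy as np

def reweighing_weights(y, groups):
    """Per-example weights w(g, y) = P(g) * P(y) / P(g, y).

    Training on the reweighted data removes the statistical
    association between the protected group and the label,
    typically at some cost in raw accuracy.
    """
    y = np.asarray(y)
    groups = np.asarray(groups)
    w = np.empty(len(y), dtype=float)
    for g in np.unique(groups):
        for lbl in np.unique(y):
            mask = (groups == g) & (y == lbl)
            p_joint = mask.mean()
            if p_joint > 0:
                w[mask] = (groups == g).mean() * (y == lbl).mean() / p_joint
    return w
```

The accuracy impact of such reweighting is exactly the kind of trade-off the open questions above refer to.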
- Standards, auditing procedures, and documentation for datasets and models are important for accountability and transparency.
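Such documentation is often operationalized as a "model card" with a fixed set of required fields. A toy completeness check (the field names are hypothetical, not from any specific standard):

```python
# Illustrative documentation fields; real standards define their own.
REQUIRED_FIELDS = [
    "model_name", "intended_use", "training_data",
    "evaluation_data", "subgroup_performance", "known_limitations",
]

def missing_fields(card):
    """Return the documentation fields a model card lacks or leaves empty."""
    return [f for f in REQUIRED_FIELDS if not card.get(f)]
```

An auditing pipeline could refuse to release a model whose card has any missing fields.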
- Taking a structural causal modeling approach enables counterfactual analysis that can deeply probe questions of bias and fairness in machine learning systems.
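Counterfactual analysis in a structural causal model follows three steps: abduction (recover the noise terms from the observation), action (intervene on a variable), and prediction (replay the mechanisms). A toy linear SCM with entirely hypothetical coefficients illustrates the procedure:

```python
def scm_counterfactual(g, x, y, new_g):
    """Counterfactual in a toy linear SCM (coefficients are illustrative):

        X := 2.0 * G + U_x             # image feature depends on group
        Y := 1.5 * X + 0.5 * G + U_y   # model output depends on both

    Given an observed (g, x, y), answer: what would X and Y have been
    had the group been new_g instead?
    """
    # Abduction: solve the structural equations for the noise terms.
    u_x = x - 2.0 * g
    u_y = y - 1.5 * x - 0.5 * g
    # Action + prediction: replay the mechanisms under do(G := new_g).
    x_cf = 2.0 * new_g + u_x
    y_cf = 1.5 * x_cf + 0.5 * new_g + u_y
    return x_cf, y_cf
```

Asking whether the prediction changes when only the protected attribute is counterfactually altered is one way to probe a system for unfair dependence on that attribute.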