Researchers find bias in AI models for cancer therapies
-
Rice University researchers found bias in machine learning models used for immunotherapy research: the training data was skewed toward higher-income communities.
-
The models predict how strongly peptides bind to human leukocyte antigen (HLA) molecules, a key step in developing personalized immunotherapies. Because the models are biased, the therapies they inform may not work equally well for all populations.
-
The researchers tested so-called "pan-allele" binding predictors and found that, in practice, they were not universal across populations.
-
Machine learning models reflect the biases in their training data. When datasets appear incomplete, researchers should connect those gaps to the historical and economic factors that leave certain populations underrepresented.
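To illustrate the general point, here is a minimal, hypothetical sketch (not the Rice study's actual data or models): a toy predictor is trained on data dominated by one population, and its per-group accuracy shows how skewed training data produces skewed performance. All names and numbers below are invented for illustration.

```python
import random

random.seed(0)

# Toy setup: two populations whose true "binding rule" differs,
# but the training data is skewed 95% toward population A --
# mimicking datasets dominated by higher-income communities.

def true_label(x, group):
    # Hypothetical rule: population A "binds" when x > 0, population B when x < 0.
    return int(x > 0) if group == "A" else int(x < 0)

def make_data(n_a, n_b):
    data = []
    for group, n in (("A", n_a), ("B", n_b)):
        for _ in range(n):
            x = random.uniform(-1, 1)
            data.append((x, group, true_label(x, group)))
    return data

train = make_data(190, 10)   # training set is 95% group A
test = make_data(200, 200)   # but both groups need accurate predictions

def predict(x):
    # 1-nearest-neighbour on the feature alone; group identity is not
    # an input, just as a "pan-allele" predictor claims to cover everyone.
    nearest = min(train, key=lambda t: abs(t[0] - x))
    return nearest[2]

def accuracy(group):
    pts = [t for t in test if t[1] == group]
    return sum(predict(x) == y for x, _, y in pts) / len(pts)

acc_a, acc_b = accuracy("A"), accuracy("B")
print(f"accuracy on group A: {acc_a:.2f}")
print(f"accuracy on group B: {acc_b:.2f}")
```

Because nearly every nearest neighbour comes from group A, the model inherits A's pattern and performs far worse on group B, even though nothing in the code singles B out. The bias lives entirely in the data composition.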
-
The publicly available data was a good start, but more unbiased data is needed so that immunotherapies can benefit people equally across demographic groups.