IBM Calls for More Diverse AI Teams to Build Fairer, Safer Models
- Diverse perspectives lead to more accurate AI models, a result formalized by the diversity prediction theorem: a group's collective error equals the average individual error minus the diversity of its predictions. Homogeneous groups therefore produce more biased models.
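The diversity prediction theorem (due to Scott Page) is an algebraic identity, so it can be checked directly. The sketch below is illustrative, not from the article; the function name and the sample numbers are ours.

```python
import statistics

def error_decomposition(predictions, truth):
    """Decompose a crowd's squared error per the diversity prediction theorem:
    collective error = average individual error - prediction diversity."""
    crowd = statistics.mean(predictions)
    collective_error = (crowd - truth) ** 2
    avg_individual_error = statistics.mean((p - truth) ** 2 for p in predictions)
    diversity = statistics.mean((p - crowd) ** 2 for p in predictions)
    return collective_error, avg_individual_error, diversity

# Hypothetical predictions of a quantity whose true value is 12.
collective, avg_individual, diversity = error_decomposition([10, 14, 18], truth=12)
print(collective, avg_individual, diversity)  # collective == avg_individual - diversity
```

Because diversity is subtracted, a crowd is never less accurate than its average member, and more diverse predictions (holding individual accuracy fixed) mean lower collective error.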
- Current AI models overrepresent English-language data and a handful of common architectures such as BERT; more diverse data and architectures are needed.
- Model owners often don't understand how to evaluate risks around fairness and bias. Diverse, multidisciplinary teams provide the psychological safety needed for these conversations.
- IBM created a Trustworthy AI Center of Excellence to train practitioners in AI ethics, with global, diverse teams working on real use cases.
- Hiring should focus on people who can curate representative data and who have the discernment to recognize where AI can cause harm; emotional intelligence is key.