Study Finds Racial and Gender Bias in Major AI Chatbots
- Chatbots like ChatGPT 4 and PaLM-2 show racial and gender bias, e.g., suggesting a lower lawyer salary for a Black woman named "Tamika" than for a white man named "Todd".
- A Stanford study tested chatbots across five scenarios and found consistent bias disadvantaging Black people and women; the only exception was a bias favoring Black athletes.
- The biases stem from the data the chatbots are trained on, which encodes common stereotypes that then influence their responses.
- Unlike previous studies, this research used an "audit analysis," a method designed to measure AI bias the same way audit studies uncover human bias in housing and employment.
- Researchers say awareness of these biases is the first step toward AI companies addressing the issue, though some differential advice may be reasonable if it reflects genuine demographic correlations.