AI Chatbots Struggle with Consistent, Unbiased Responses Across Racial Groups
- Fox News tested AI chatbots from Google, Meta, Microsoft, and OpenAI for racial bias by prompting them to generate images and information about different racial groups.
- Google's Gemini refused to show images of white people or families, claiming doing so would reinforce stereotypes, but generated such images for other races. Meta's and Microsoft's chatbots also responded inconsistently.
- When asked for the achievements of racial groups, the AIs often added disclaimers about focusing on a single group or attributed achievements incorrectly. Only ChatGPT responded consistently across groups.
- When asked for images celebrating the diversity and achievements of each race, the AIs struggled to generate images for white people specifically, offering lengthy disclaimers instead.
- When prompted for significant historical figures by race, Gemini included black historical figures in response to a request for white people. Meta refused to provide information on white people, claiming "whiteness" is an oppressive construct.