Google's AI Chatbot Faces Bias Concerns Over Toxicity Definitions
- Google's AI chatbot Gemini was trained using datasets that define "toxicity", which could bake bias into the model's outputs. For example, some sites in the data carried political-bias labels, such as Breitbart being tagged as right-leaning.
- Gemini relies on classifiers and filters intended to make its responses more inclusive, but expert Kris Ruby argues these could be ideologically skewed if toxicity is poorly defined (a simplified sketch of how such a filter might operate follows this list).
- The core issue is how toxicity and safety are defined behind the scenes at Google; those definitions shape the actions and outputs of AI systems like Gemini.
- Altering historical facts with AI could distort the information landscape, and over-optimizing the model toward ideological positions could explain some of the issues seen in Gemini's outputs.
- If Google doesn't align Gemini with user intent, it risks building a "useless" product and losing ground to competing AI search tools.
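The sketch below is a hypothetical illustration, not Google's actual pipeline: the classifier, the source labels, and the threshold values are all invented for this example. It shows how choices about what counts as "toxic" and which sources get bias labels can quietly determine what a chatbot is allowed to say.

```python
# Hypothetical sketch only -- not Google's implementation. All labels,
# thresholds, and scoring logic here are invented for illustration.
from dataclasses import dataclass


# Assumed source-bias labels, standing in for dataset annotations.
SOURCE_BIAS_LABELS = {
    "breitbart.com": "right-leaning",          # label mentioned in the article
    "example-left-site.org": "left-leaning",   # hypothetical counterpart
}


@dataclass
class SafetyConfig:
    # Responses scoring above this "toxicity" threshold are blocked.
    toxicity_threshold: float = 0.5
    # Which bias labels get suppressed is a policy choice, not ground truth.
    suppressed_labels: frozenset = frozenset({"right-leaning"})


def toy_toxicity_score(text: str) -> float:
    """Stand-in for a learned toxicity classifier: keyword overlap only."""
    flagged_terms = {"hate", "attack", "slur"}
    words = text.lower().split()
    hits = sum(1 for word in words if word in flagged_terms)
    return min(1.0, 10 * hits / max(len(words), 1))


def filter_response(text: str, source: str, config: SafetyConfig) -> str:
    """Return the draft response, or a refusal if the filter rejects it."""
    if toy_toxicity_score(text) > config.toxicity_threshold:
        return "[blocked: classified as toxic]"
    if SOURCE_BIAS_LABELS.get(source) in config.suppressed_labels:
        return "[suppressed: source carries a bias label]"
    return text


if __name__ == "__main__":
    config = SafetyConfig()
    # Identical text, different source labels -> different behavior.
    print(filter_response("A neutral summary of the topic.", "breitbart.com", config))
    print(filter_response("A neutral summary of the topic.", "example.com", config))
```

None of these gating decisions are visible to the user; they live in configuration values and training-data labels, which is exactly where Ruby argues ideological skew can enter.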