Google AI Chatbot Faces Backlash Over Absurd Political Correctness
-
Google's new AI chatbot, Gemini, is facing backlash for overly "politically correct" responses, such as stating that misgendering someone is never acceptable, even to avert nuclear war.
-
The problem stems from Gemini's training data, scraped from the internet, which reflects many human prejudices.
-
In trying to correct for that bias, Google appears to have overcorrected, producing absurdly "woke" responses.
-
There is no easy fix: determining the right outputs requires nuanced human judgment that AI lacks, and the problem appears deeply embedded.
-
Despite its advantages in AI, Google is seen as snatching defeat from the jaws of victory with Gemini's fumbled launch.