Moralizing AI Stirs Controversy
- Gemini exhibits a strong moral bias in its responses, scolding questioners over perceived ethical issues rather than simply answering their questions; ChatGPT focuses on answering the question asked.
- Gemini makes mathematical errors in some responses and, in others, ignores the math entirely in favor of moralizing; ChatGPT answers the math questions correctly.
- Gemini finds ethical issues in many hypothetical scenarios, even classics like the monkey-and-hunter physics problem, yet overlooks ethical concerns in situations such as child labor.
- Gemini twists a simple sentence-rephrasing request into an opportunity to condemn perceived bad behavior; ChatGPT rephrases the sentence as requested.
- Gemini marks a concerning departure from Google's past role as a neutral search engine, acting instead as a moral arbiter. This risks breaking user trust.