Concerns Raised Over Generative AI's Content Moderation
- Generative AI systems like ChatGPT and Google's Gemini are censoring certain content they deem "harmful," including factual information about controversial comedians like Lenny Bruce. This raises concerns about overreach.
- The definitions of "harm" in use are vague and subjective, and they prevent users from accessing the information needed to make their own judgments. This undermines GenAI's promise to expand human reasoning.
- As GenAI integrates into everyday technologies like search, word processing, and email, restrictive guardrails could stunt knowledge, reasoning, and creativity, producing a kind of "digital osteoporosis."
- While some guardrails are needed to prevent concrete harms, over-implementation driven by regulation could create strong incentives for AI companies to limit human agency.
- Lawmakers, companies, and civil society should ensure that GenAI systems enhance rather than replace human reasoning, with access to multiple perspectives.