Overly Cautious AI Safety Rules May Have Unintended Consequences
- Bing Image Creator has broad content rules prohibiting harmful content, but applies them even more restrictively in practice.
- Bing blocked the author's prompt to depict people with mouths taped shut, likely out of concern that the image could encourage violence.
- Overly cautious application of AI safety rules will likely have unintended consequences as AI expands into critical domains.
- AI companies feel pressure to adopt very restrictive policies to satisfy critics, and AI systems then apply those policies even more broadly than intended.
- It's unclear how to balance safety against unintended harms as AI moves into areas like medicine and hiring.