Meta's AI Content Labels Easily Removed, Rendering Efforts Ineffective
- Meta's plan to label AI-generated content with invisible watermarks will do little to curb disinformation: bad actors can sidestep it by using unsecured AI tools or by stripping the watermarks.
- The authors removed AI watermarks from images with simple screenshots, rendering Meta's labeling effort ineffective.
- Better solutions could include watermarks that are as indelible as possible, embedded imperceptibly in the content itself, paired with detector tools that identify AI-generated media.
- Voluntary industry commitments have lacked timelines and concrete steps toward improved watermarking; regulation may be needed.
- Congress should pass laws protecting elections from AI-accelerated threats, including a ban on lies about voting logistics that suppress turnout.
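The gap between fragile labels and watermarks embedded in the content itself can be sketched with a toy example. This is an illustrative sketch, not Meta's actual scheme: a label stored only in image metadata (EXIF/XMP) vanishes the moment someone screenshots the image, while a mark written into the pixel values at least survives metadata stripping. Note that even this naive least-significant-bit (LSB) approach is destroyed by lossy recompression, which is why more robust schemes embed the mark in frequency-domain coefficients instead.

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bits of 8-bit pixel values.

    Unlike a metadata tag, this mark lives in the image content itself,
    so stripping metadata (e.g., via screenshot) does not remove it.
    """
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, n_bytes):
    """Recover n_bytes previously hidden by embed()."""
    data = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# Usage: mark a toy 8x8 grayscale image and read the mark back.
pixels = [128] * 64
marked = embed(pixels, b"AI")
assert extract(marked, 2) == b"AI"
```

Each pixel changes by at most 1, keeping the mark imperceptible; the trade-off is that any transformation touching pixel values (JPEG compression, resizing) wipes it out, illustrating why "maximally indelible" watermarking remains an open engineering problem.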