Tech Giants Race to Get Ahead of AI Disinformation Before 2024 Election
-
Tackling AI disinformation is crucial for tech companies before the 2024 US presidential election, but addressing false information and deepfakes has become increasingly difficult as synthetic content creation tools are now widely accessible.
-
OpenAI has introduced policies banning the use of its models like ChatGPT for political campaigning and lobbying, impersonating real people/organizations, or meddling with democratic processes like voting, but enforcing these rules remains challenging.
-
Companies are trying different strategies to combat disinformation, like digital watermarks that identify content as AI-generated, requiring transparency from creators, and platform policies prohibiting political deepfakes.
-
Multiple US states have laws prohibiting political candidates from using deepfakes to influence voters, but there is no federal law yet, and existing regulations do not comprehensively address AI-driven disinformation.
-
While tech companies are working to curb AI disinformation, their methods have limitations, and users often find ways to circumvent platforms' safety measures.