AI Manipulation of Media Creates Concerns Over Disinformation Ahead of Elections
- AI technology such as deepfakes can manipulate audio and video to influence elections through disinformation. Examples include the fake Biden robocall in New Hampshire and AI-generated images in political ads.
- Social media companies and AI developers have introduced policies and tools to detect and label AI-manipulated content, but enforcement remains inconsistent.
- Individuals can watch for common mistakes AI makes in generating text, images, and video that seem slightly "off," though deepfakes are becoming more sophisticated.
- Fact-checking suspect images and searching for original, higher-resolution versions can help determine whether they are AI-generated. Google's About This Image tool also provides context, such as when an image first appeared online and how it has been used elsewhere.
- Free and paid AI-detection tools and browser extensions can automatically scan text, images, and video for signs of AI manipulation, but they have limitations such as false positives.
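One of the checks above, comparing a suspect image against a candidate original found via search, can be sketched with a simple "average hash," a basic perceptual-hashing technique used by some image-matching tools. This is an illustrative toy, not any specific product's method: images are modeled as plain 2D lists of grayscale values, whereas a real workflow would load image files with a library such as Pillow.

```python
def average_hash(pixels, size=8):
    """Downscale the image to size x size by block averaging, then mark
    each cell 1 if it is brighter than the overall mean, else 0."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # Average the block of source pixels that maps to this cell.
            r0, r1 = r * h // size, (r + 1) * h // size
            c0, c1 = c * w // size, (c + 1) * w // size
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(a, b):
    """Count differing bits; a small distance suggests the same picture."""
    return sum(x != y for x, y in zip(a, b))

# A synthetic 32x32 "original": bright left half, dark right half.
original = [[200 if j < 16 else 40 for j in range(32)] for i in range(32)]
# A lightly recompressed copy: same content plus small pixel noise.
copy = [[min(255, p + (i + j) % 5) for j, p in enumerate(row)]
        for i, row in enumerate(original)]
# A genuinely different image: bright top half instead.
other = [[200 if i < 16 else 40 for j in range(32)] for i in range(32)]

print(hamming(average_hash(original), average_hash(copy)))   # → 0
print(hamming(average_hash(original), average_hash(other)))  # → 32
```

The design point is that the hash survives resizing and mild recompression (distance near zero) while unrelated images land far apart, which is why finding a higher-resolution original is useful evidence about a suspect copy.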