Generative AI is being used to create misinformation that is increasingly difficult to distinguish from reality, posing threats such as the manipulation of public opinion, disruption of democratic processes, and erosion of trust. Experts advise skepticism, close attention to detail, and refraining from sharing potentially AI-generated content.
AI plays a dual role on social media platforms, serving both as a tool for manipulation and as a means of detection, and is viewed as a threat to voter sentiment in the upcoming US presidential elections. China-affiliated actors are leveraging AI-generated visual media to amplify politically divisive topics, while companies like Accrete AI are deploying AI to detect and predict disinformation threats in real time.
Chat2024 has soft-launched an AI-powered platform featuring avatars of 17 presidential candidates, allowing users to ask questions and debate with the AI replicas. While the avatars are not yet perfect imitations, they demonstrate the potential of AI technology to replicate politicians and engage voters in a deeper, more interactive way.
AI-generated content is becoming increasingly prevalent in political campaigns and poses a significant threat to democratic processes, as it can be used to spread misinformation and disinformation that manipulates voters.
AI-generated images can fabricate alternative histories and spread misinformation, raising concerns about their impact on elections and voters' ability to distinguish truth from manipulated visuals.
Foreign actors are increasingly using artificial intelligence, including generative AI and large language models, to produce and distribute disinformation during elections, posing a new and evolving threat to democratic processes worldwide. With elections approaching in many countries, the effectiveness and impact of AI-produced propaganda remain uncertain, underscoring the need for efforts to detect and counter such disinformation campaigns.