### Summary
Rep. Jake Auchincloss emphasizes the need to address the challenges posed by artificial intelligence (AI) without delay and warns against allowing AI to become "social media 2.0." He believes that each industry should develop its own regulations and norms for AI.
### Facts
- Rep. Jake Auchincloss argues that new technology, including AI, has historically disrupted and displaced parts of the economy while also enhancing creativity and productivity.
- He cautions against a one-size-fits-all approach to regulating AI and instead advocates industry-specific regulations for healthcare, financial services, education, and journalism.
- Rep. Auchincloss highlights the importance of holding social media companies liable for hosting defamatory AI-generated content, such as synthetic videos.
- He believes that misinformation spread through fake videos could have significant consequences in the 2024 election and supports amending Section 230 to address this issue.
- Rep. Auchincloss intends to prioritize addressing these concerns and hopes to build consensus on the issue before the 2024 election.
- He remains focused on his current role representing Massachusetts's Fourth Congressional District and expresses satisfaction with the position, though he does not rule out future opportunities in other fields.
Social media companies such as Facebook and YouTube are scaling back their efforts to combat political misinformation, a retreat expected to have significant implications for the 2024 US presidential election.
According to Microsoft President Brad Smith, the rapid development of artificial intelligence poses risks similar to those seen with social media, including disinformation, misuse, and disruption of the job market. Smith emphasized the need for caution and guardrails to ensure AI is developed responsibly.
### AI Algorithms Battle Russian Disinformation Campaigns on Social Media
A mysterious individual known as Nea Paw has developed an AI-powered project called CounterCloud to combat mass-produced AI disinformation. In response to tweets from Russian media outlets and the Chinese embassy criticizing the US, CounterCloud produced counter-tweets, articles, and even fake journalists and news sites, all generated entirely by AI algorithms. Paw argues that the project highlights the danger of easily accessible generative AI tools being turned to state-backed propaganda. Some suggest that educating users about manipulative AI-generated content, or equipping browsers with AI-detection tools, could mitigate the problem; Paw considers those solutions neither effective nor elegant.

Disinformation researchers have long warned that AI language models could power personalized propaganda campaigns and sway social media users, and evidence of such campaigns has already emerged: academic researchers uncovered a botnet driven by the AI language model ChatGPT, and legitimate political campaigns, including the Republican National Committee, have used AI-generated content such as fake images. AI-generated text can still be fairly generic, but with human finishing it becomes highly effective and difficult to catch with automated filters. OpenAI has expressed concern that its technology could be used to create tailored, automated disinformation at a large scale; although the company has updated its policies to restrict political usage, effectively blocking the generation of such material remains a challenge. As AI tools become increasingly accessible, society must become aware of their presence in politics and guard against their misuse.
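To make the "automated filters" point concrete, here is a minimal sketch of one commonly discussed detection heuristic, perplexity scoring, which flags text a language model finds unusually predictable. It assumes the Hugging Face `transformers` library; the `gpt2` checkpoint, the sample sentence, and the 20.0 cutoff are illustrative assumptions, not any vendor's actual detector.

```python
# Perplexity-based heuristic for spotting possibly machine-generated text.
# Illustrative sketch only: model choice and threshold are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means the model finds the
    text more predictable, which is one (weak) signal of machine generation."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

sample = "Officials confirmed the announcement at a press conference on Tuesday."
score = perplexity(sample)
# The cutoff is purely illustrative; light human editing routinely pushes
# AI output past any fixed threshold, which is exactly the detection
# difficulty described above.
print(f"perplexity={score:.1f} -> {'flag for review' if score < 20.0 else 'pass'}")
```

Heuristics like this are easy to evade, which is why researchers treat automated filtering as, at best, one layer of defense rather than a solution.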
AI on social media is seen as both a threat to voter sentiment in the upcoming US presidential election and a tool for defending it: China-affiliated actors are leveraging AI-generated visual media to amplify politically divisive topics, while companies like Accrete AI are employing AI to detect and predict disinformation threats in real time.
Tech giants like Facebook, YouTube, and Twitter are rolling back policies aimed at curbing misinformation, raising concerns about their ability to handle a global election season rife with disinformation and manipulation.
Lawmakers are pressuring social media platforms like Facebook and Instagram to explain why they lack rules curbing the harms of AI-generated political advertisements ahead of the 2024 US presidential election.
Lawmakers are calling on social media platforms, including Facebook and Twitter, to take action against AI-generated political ads that could spread election-related misinformation and disinformation ahead of the 2024 US presidential election. Google has already announced new labeling requirements for deceptive AI-generated political advertisements.
The rise of false and misleading information on social media, exacerbated by advances in artificial intelligence, has created an authenticity crisis that is eroding trust in traditional news outlets and deepening social and political divisions.
The US Supreme Court has kept on hold lower-court restrictions on the Biden administration's ability to encourage social media companies to remove misinformation, pending further review.