TikTok is looking to enhance its social networking features and expand its messaging capabilities to build meaningful connections between users and to retain and grow user engagement.
Deepfakes, fake videos or images created by AI, pose real risks to markets: they can be used to manipulate financial markets and to target businesses with scams. The most significant harm, however, lies in deepfake pornography, particularly non-consensual explicit content, which inflicts emotional and psychological harm on victims and raises serious concerns about privacy, consent, and exploitation.
TikTok is launching a new tool that allows creators to label their AI-generated content and is testing automatic labeling of AI content, aiming to increase transparency and prevent confusion or misinformation.
AI-generated deepfakes pose serious challenges for policymakers, as they can be used for political propaganda, incite violence, create conflicts, and undermine democracy, highlighting the need for regulation and control over AI technology.
AI-generated deepfake images and videos are becoming increasingly prevalent, posing significant threats to society, democracy, and scientific research by spreading misinformation and enabling malicious use. Researchers are developing tools to detect and tag synthetic content, but education, regulation, and responsible behavior by technology companies are also needed to address this growing issue.
Artificial intelligence (AI) has the potential to facilitate deceptive practices such as deepfake videos and misleading ads, posing a threat to American democracy, according to experts who testified before the U.S. Senate Rules Committee.
Creators on TikTok were approached by a company called StartupHelper, which offered a brand deal to promote a service that would create AI clones of people to attend virtual job interviews; the offer quickly raised ethical concerns, and the company subsequently went dark online.
A nonprofit called AIandYou is launching a public awareness campaign to educate voters about the potential impact of AI on the 2024 election, including using AI-generated deepfake content to familiarize voters with this technology.
YouTube star MrBeast, also known as Jimmy Donaldson, questions if social media platforms are equipped to handle the rise of fake AI ads after a deepfake scam ad featuring him offering $2 iPhones appeared on TikTok.
Tom Hanks warns that an AI-powered dental plan advert featuring him is a deepfake, highlighting the growing concern of AI-generated fake content and its impact on industries such as entertainment and politics.
Deepfake videos featuring celebrities such as Gayle King, Tom Hanks, and Elon Musk have prompted concerns about the misuse of AI technology, leading to calls for legislation and for ethical standards governing their creation and dissemination. The celebrities have denounced these AI-generated videos as inauthentic and misleading, emphasizing the need for legal protection and for clear labeling of such content.
Deepfake AI technology poses a new threat in the Israel-Gaza conflict, enabling manipulated videos that can spread misinformation and alter public perception. This has prompted media outlets such as CBS to develop capabilities for handling deepfakes, though many organizations still underestimate the extent of the threat. Israeli startup Clarity, which has built an AI Collective Intelligence Engine, is working to tackle the deepfake challenge and protect against the manipulation of public opinion.
In 2024, social media trends suggest prioritizing user-generated content, leveraging AI tools for content creation, focusing on TikTok and LinkedIn as top-performing platforms, and shifting social media KPIs toward engagement and shares as measures of authenticity.