Google to Require AI Disclosure in Political Ads Amid Concerns Over Deepfakes

  • Google will require political ads that use AI to generate images or audio to carry a disclosure that is clear and noticeable to users.

  • The new policy comes as campaigns like Ron DeSantis' have begun using AI in ads, including fake imagery of Donald Trump hugging Anthony Fauci.

  • The Federal Election Commission has also unveiled plans to regulate AI content in political ads ahead of the 2024 election, and lawmakers such as Senate Majority Leader Chuck Schumer support legislation as well.

  • Experts question how effective the rules will be. Some call it a "feel good" measure that won't make a real difference.

  • Google says the policy builds on past transparency efforts and will help support responsible political advertising. But the real value to viewers remains to be seen.

foxnews.com
Relevant topic timeline:
The main topic of the article is the impact of AI on Google and the tech industry. The key points are:
1. Google's February keynote, delivered in response to Microsoft's GPT-powered Bing announcement, was poorly executed.
2. Google's focus on AI is unsurprising given its long-standing emphasis on the technology.
3. Google's AI capabilities have evolved over the years, as seen in products like Google Photos and Gmail.
4. Google's AI capabilities are a sustaining innovation for the company and the tech industry as a whole.
5. The proposed E.U. regulations on AI could have significant implications for American tech companies and open-source developers.
Main topic: The use of generative AI in advertising and the need for standard policies and protections for AI-generated content. Key points:
1. Large advertising agencies and multinational corporations, such as WPP and Unilever, are turning to generative AI to cut marketing costs and create more ads.
2. Examples of successful use of generative AI in advertising include Nestlé and Mondelez using OpenAI's DALL-E 2 for Cadbury ads and Unilever developing its own generative AI tools for shampoo spiels.
3. There is a need for standard policies and protections for AI-generated content in advertising, including the use of watermarking technology to label AI-created content (see the sketch after this list) and concerns over copyright protection and security risks.
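To make the watermarking idea above concrete, here is a minimal sketch that hides a plain-text label in an image's least-significant bits. It is a toy illustration of the general concept, not any vendor's production scheme; the `TAG` label and function names are hypothetical, and real systems are designed to survive recompression and editing, which this one does not.

```python
# Toy invisible watermark: embed a label in the red channel's
# least-significant bits (LSBs), one bit per pixel. Illustrative only.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical label; real schemes use robust, signed payloads

def embed_tag(src_path: str, dst_path: str, tag: str = TAG) -> None:
    """Hide `tag` in the LSB of the red channel, one bit per pixel."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    bits = "".join(f"{b:08b}" for b in tag.encode()) + "0" * 8  # NUL-terminated
    w, h = img.size
    assert len(bits) <= w * h, "image too small to hold the tag"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    img.save(dst_path, "PNG")  # must be lossless, or the bits are destroyed

def read_tag(path: str) -> str:
    """Recover the embedded tag, stopping at the NUL terminator."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    w, h = img.size
    out, byte = bytearray(), 0
    for i in range(w * h):
        x, y = i % w, i // w
        byte = (byte << 1) | (pixels[x, y][0] & 1)  # collect red LSBs
        if i % 8 == 7:
            if byte == 0:  # hit the terminator
                break
            out.append(byte)
            byte = 0
    return out.decode(errors="replace")
```

Saving to a lossy format like JPEG would destroy the embedded bits, which is one reason production watermarking works in more robust domains than raw pixel LSBs.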
Main topic: Increasing use of AI in manipulative information campaigns online. Key points:
1. Mandiant has observed the use of AI-generated content in politically motivated online influence campaigns since 2019.
2. Generative AI models make it easier to create convincing fake videos, images, text, and code, posing a threat.
3. While the impact of these campaigns has been limited so far, AI's role in digital intrusions is expected to grow.
### Summary

The rise of generative artificial intelligence (AI) is making it difficult for the public to differentiate between real and fake content, raising concerns about deceptive fake political content in the upcoming 2024 presidential race. However, the Content Authenticity Initiative is working on a digital standard to restore trust in online content.

### Facts

- Generative AI is capable of producing hyper-realistic fake content, including text, images, audio, and video.
- AI tools have been used to create deceptive political content, such as images of President Joe Biden in a Republican Party ad and a fabricated voice of former President Donald Trump endorsing Florida Gov. Ron DeSantis' White House bid.
- The Content Authenticity Initiative, a coalition of companies, is developing a digital standard to restore trust in online content.
- Truepic, a company involved in the initiative, uses camera technology to add verified content provenance information to images, helping to verify their authenticity.
- The initiative aims to display "content credentials" that provide information about the history of a piece of content, including how it was captured and edited (see the sketch after this entry).
- The hope is for widespread adoption of the standard by creators to differentiate authentic content from manipulated content.
- Adobe is in conversations with social media platforms about implementing the new content credentials, but no platforms have joined the initiative yet.
- Experts are concerned that generative AI could further erode trust in information ecosystems and potentially impact democratic processes, highlighting the importance of industry-wide change.
- Regulators and lawmakers are discussing how to address the challenges posed by AI-generated fake content.
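To make the "content credentials" idea concrete, here is a minimal sketch of the underlying pattern: a provenance record bound to a hash of the asset and signed at capture time, so any later alteration breaks verification. This illustrates the concept only; the actual Content Authenticity Initiative / C2PA specification defines a much richer manifest format embedded in the file itself, and the function names here are hypothetical.

```python
# Minimal content-provenance sketch: sign a record binding an asset hash
# to its claimed capture/edit history; verify both before trusting it.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def issue_credential(asset: bytes, history: list[str], key: Ed25519PrivateKey) -> dict:
    """Create a signed provenance record for `asset` (e.g. at capture time)."""
    record = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "history": history,  # e.g. ["captured: verified camera", "edited: crop"]
    }
    payload = json.dumps(record, sort_keys=True).encode()  # deterministic bytes
    return {"record": record, "signature": key.sign(payload).hex()}

def verify_credential(asset: bytes, credential: dict, public_key) -> bool:
    """Check that the asset is unmodified and the record was really signed."""
    record = credential["record"]
    if hashlib.sha256(asset).hexdigest() != record["asset_sha256"]:
        return False  # asset altered after the credential was issued
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except Exception:
        return False  # signature invalid: record forged or tampered with

# Usage: a trusted capture device signs at creation; a viewer verifies later.
key = Ed25519PrivateKey.generate()
asset = b"...image bytes..."
cred = issue_credential(asset, ["captured: verified camera"], key)
print(verify_credential(asset, cred, key.public_key()))         # True
print(verify_credential(asset + b"x", cred, key.public_key()))  # False
```

The design point is that trust attaches to the signing key (for example, a verified camera or editing tool), so a viewer needs only the signer's public key to check both integrity and claimed history.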
### Summary

A debate has arisen about whether AI-generated content should be labeled as such, but Google does not require AI labeling, as it values quality content regardless of its origin. Human editors and a human touch are still necessary to ensure high-quality and trustworthy content.

### Facts

- Over 85% of marketers use AI in their content production workflow.
- AI labeling involves indicating that a piece of content was generated using artificial intelligence.
- Google places a higher emphasis on content quality than on its origin.
- The authority of the website and author is important to Google.
- Google can detect AI-generated content but focuses on content quality and user intent.
- Human editors are needed to verify facts and ensure high-quality content.
- Google prioritizes natural language, which requires a human touch.
- As AI becomes more prevalent, policies and frameworks may evolve.
### Summary

Google's AI-generated search results have produced troubling answers, including justifications for slavery and genocide, and inaccurate information on various topics.

### Facts

- A search for "benefits of slavery" resulted in Google's AI listing advantages of slavery, including fueling the plantation economy and funding colleges and markets.
- Search terms like "benefits of genocide" prompted Google's AI to confuse arguments in favor of acknowledging genocide with arguments in favor of genocide itself.
- Google's AI responded to queries about the benefits of guns with questionable statistics and dubious reasoning.
- When a user searched for "how to cook Amanita ocreata," a highly poisonous mushroom, Google provided step-by-step instructions that would lead to harm instead of warning about its toxicity.
- Google appears to censor certain search terms from generating AI responses while others slip through the filters (see the sketch after this entry).
- The issue was discovered by Lily Ray, who tested search terms likely to produce problematic results.
- Google's Search Generative Experience (SGE), an AI-powered search tool, is being tested in the US with limited availability.
- Bing, Google's main competitor, provided more accurate and detailed responses to similar queries on controversial topics.
- Google's SGE also displayed inaccuracies in responses about rock stars, CEOs, chefs, and child-rearing practices.
- Large language models like Google's SGE may have inherent limitations that make it difficult to filter out problematic responses.
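The filtering gap described above is easy to reproduce in miniature. The sketch below, using an entirely hypothetical blocklist, shows why exact-term filters of the kind the entry suggests are inherently leaky: they catch one phrasing of a harmful query while trivial paraphrases pass untouched.

```python
# Why naive keyword blocklists leak: exact matching catches one phrasing,
# paraphrases slip through. Blocklist terms here are hypothetical.
BLOCKLIST = {"benefits of slavery", "benefits of genocide"}

def is_blocked(query: str) -> bool:
    """Exact-match filter over normalized query text."""
    return query.lower().strip() in BLOCKLIST

print(is_blocked("benefits of slavery"))      # True  -- exact phrasing caught
print(is_blocked("upsides of slavery"))       # False -- synonym slips through
print(is_blocked("was slavery beneficial?"))  # False -- rephrasing slips through
```

Closing that gap requires semantic rather than lexical matching, which points at the inherent limitation the entry mentions.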
AI labeling, or disclosing that content was generated using artificial intelligence, is not deemed necessary by Google for ranking purposes; the search engine values quality content, user experience, and authority of the website and author more than the origin of the content. However, human editors are still crucial for verifying facts and adding a human touch to AI-generated content to ensure its quality, and as AI becomes more widespread, policies and frameworks around its use may evolve.
Major media organizations are calling for new laws to protect their content from being used by AI tools without permission, expressing concerns over unauthorized scraping and the potential for AI to produce false or biased information.
AI Algorithms Battle Russian Disinformation Campaigns on Social Media

A mysterious individual known as Nea Paw has developed an AI-powered project called CounterCloud to combat mass-produced AI disinformation. In response to tweets from Russian media outlets and the Chinese embassy that criticized the US, CounterCloud produced tweets, articles, and even journalists and news sites that were entirely generated by AI algorithms. Paw believes the project highlights the danger of easily accessible generative AI tools being used for state-backed propaganda. While some argue that educating users about manipulative AI-generated content or equipping browsers with AI-detection tools could mitigate the issue, Paw considers these solutions neither effective nor elegant.

Disinformation researchers have long warned that AI language models could be used for personalized propaganda campaigns and to influence social media users. Evidence of AI-powered disinformation campaigns has already emerged, with academic researchers uncovering a botnet powered by the AI language model ChatGPT. Legitimate political campaigns, such as the Republican National Committee, have also utilized AI-generated content, including fake images. AI-generated text can still be fairly generic, but with human finesse it becomes highly effective and difficult to detect using automated filters. OpenAI has expressed concern about its technology being used to create tailored automated disinformation at a large scale, and while it has updated its policies to restrict political usage, effectively blocking the generation of such material remains a challenge. As AI tools become increasingly accessible, society must become aware of their presence in politics and guard against their misuse.
Deceptive generative AI-based political ads are becoming a growing concern, making it easier to sell lies and increasing the need for news organizations to understand and report on these ads.
AI technology is making it easier and cheaper to produce mass-scale propaganda campaigns and disinformation, using generative AI tools to create convincing articles, tweets, and even journalist profiles, raising concerns about the spread of AI-powered fake content and the need for mitigation strategies.
Google's AI-generated search result summaries, which use key points from news articles, are facing criticism for potentially incentivizing media organizations to put their work behind paywalls and leading to accusations of theft. Media companies are concerned about the impact on their credibility and revenue, prompting some to seek payment from AI companies to train language models on their content. However, these generative AI models are not perfect and require user feedback to improve accuracy and avoid errors.
Google is updating its political content policy to require disclosure of all AI-generated content in election ads, with the new policy taking effect in November 2023.
AI on social media platforms, both as a tool for manipulation and for detection, is seen as a potential threat to voter sentiment in the upcoming US presidential elections, with China-affiliated actors leveraging AI-generated visual media to emphasize politically divisive topics, while companies like Accrete AI are employing AI to detect and predict disinformation threats in real-time.
Artificial intelligence (AI) poses a high risk to the integrity of the election process, as evidenced by the use of AI-generated content in politics today, and there is a need for stronger content moderation policies and proactive measures to combat the use of AI in coordinated disinformation campaigns.
China is employing artificial intelligence to manipulate American voters through the dissemination of AI-generated visuals and content, according to a report by Microsoft.
Google CEO Sundar Pichai discusses Google's focus on artificial intelligence (AI) in an interview, expressing confidence in Google's AI capabilities and emphasizing the importance of responsibility, innovation, and collaboration in the development and deployment of AI technology.
Artificial intelligence should not be used in journalism due to the potential for generating fake news, undermining the principles of journalism, and threatening the livelihood of human journalists.
Artificial intelligence (AI) has become the new focus of concern for tech-ethicists, surpassing social media and smartphones, with exaggerated claims of AI's potential to cause the extinction of the human race. These fear-mongering tactics and populist misinformation have garnered attention and book deals for some, but are lacking in nuance and overlook the potential benefits of AI.
AI-generated content is becoming increasingly prevalent in political campaigns and poses a significant threat to democratic processes as it can be used to spread misinformation and disinformation to manipulate voters.
Amazon will require publishers who use AI-generated content to disclose their use of the technology, small businesses are set to benefit from AI and cloud technologies, and President Biden warned the UN about the potential risks of AI and the need for its governance, according to the latest AI news roundup from Fox News.
Google's search engines are failing to block fake, AI-generated imagery from its top search results, raising concerns about misinformation and the search giant's ability to handle phony AI material.
As artists and creators begin exploring the potential of AI in their work, debate is emerging over whether government-imposed limits on AI computation would implicate the First Amendment, and the question of whether there is a First Amendment right to compute is becoming increasingly relevant for expressive content generated by AI.
Google is using romance novels to humanize its natural language AI; reaching AI singularity could restore our sense of wonder; machines writing ad copy raise concerns among the creative class; and AI has implications for education, crime prevention, and warfare, among other domains.
Artificial intelligence (AI) has the potential to facilitate deceptive practices such as deepfake videos and misleading ads, posing a threat to American democracy, according to experts who testified before the U.S. Senate Rules Committee.
The proliferation of fake news generated by AI algorithms poses a threat to media outlets and their ability to differentiate between true and false information, highlighting the need for human curation and the potential consequences of relying solely on algorithms.
Minnesota Democrats are calling for regulations on artificial intelligence (AI) in elections, expressing concerns about the potential for AI to deceive and manipulate voters, while also acknowledging its potential benefits for efficiency and productivity in election administration.
Foreign actors are increasingly using artificial intelligence, including generative AI and large language models, to produce and distribute disinformation during elections, posing a new and evolving threat to democratic processes worldwide. As elections in various countries are approaching, the effectiveness and impact of AI-produced propaganda remain uncertain, highlighting the need for efforts to detect and combat such disinformation campaigns.
Artificial intelligence should not be used in journalism, particularly in generating opinion pieces, as AI lacks the ability to understand nuances, make moral judgments, respect rights and dignity, adhere to ethical standards, and provide context and analysis, which are all essential for good journalism. Additionally, AI-generated content would be less engaging and informative for readers and could potentially promote harmful or biased ideas.
A nonprofit called AIandYou is launching a public awareness campaign to educate voters about the potential impact of AI on the 2024 election, including using AI-generated deepfake content to familiarize voters with this technology.
Artificial intelligence (AI) can be a positive force for democracy, particularly in combatting hate speech, but public trust should be reserved until the technology is better understood and regulated, according to Nick Clegg, President of Global Affairs for Meta.
China's use of artificial intelligence (AI) to manipulate social media and shape global public opinion poses a growing threat to democracies, as generative AI allows for the creation of more effective and believable content at a lower cost, with implications for the 2024 elections.
Google is aggressively positioning itself as a leader in AI but risks focusing too much on AI technology at the expense of useful features that customers actually want.
Lawmakers are calling on social media platforms, including Facebook and Twitter, to take action against AI-generated political ads that could spread election-related misinformation and disinformation, ahead of the 2024 U.S. presidential election. Google has already announced new labeling requirements for deceptive AI-generated political advertisements.