Google Requires Disclosure of AI Use in Political Ads Starting in November

  • Google announced a new policy requiring political advertisers to disclose use of AI or synthetic media in ads on Google platforms starting in November.

  • The policy applies to ads whose image, video, or audio content has been altered or synthetically generated in ways that inauthentically depict real or realistic-looking events or claims.

  • Some Democratic lawmakers welcomed Google's move as a positive first step for transparency and accountability around AI in political ads.

  • The policy comes amid rapid growth of AI capabilities and early examples of AI use in political ads, spurring calls for regulation.

  • Democrats in Congress have set up working groups to explore AI regulation, while Republicans have begun briefing their members on the technology, as AI's role in campaigns expands.

foxbusiness.com
Relevant topic timeline:
The main topic of the article is the impact of AI on Google and the tech industry. Key points:
1. Google's February keynote, staged in response to Microsoft's GPT-powered Bing announcement, was poorly executed.
2. Google's renewed focus on AI is unsurprising given its long-standing emphasis on the technology.
3. Google's AI capabilities have evolved over the years, as seen in products like Google Photos and Gmail.
4. Google's AI capabilities are a sustaining innovation for the company and the tech industry as a whole.
5. The proposed E.U. regulations on AI could have significant implications for American tech companies and open-source developers.
Main topic: Google is evaluating tools that use artificial intelligence (A.I.) to perform personal and professional tasks, including providing life advice and tutoring. Key points:
1. Google DeepMind is working on generative A.I. tools for personal and professional tasks, such as giving life advice and creating financial budgets.
2. Google is racing with rivals like OpenAI and Microsoft to develop A.I. technology and stay at the forefront of the industry.
3. The tools are still being evaluated, and there are concerns about the potential risks and ethical implications of relying on A.I. for sensitive tasks.
Main topic: The use of generative AI in advertising and the need for standard policies and protections for AI-generated content. Key points:
1. Large advertising agencies and multinational corporations, such as WPP and Unilever, are turning to generative AI to cut marketing costs and create more ads.
2. Successful examples of generative AI in advertising include Nestlé and Mondelez using OpenAI's DALL-E 2 for Cadbury ads, and Unilever developing its own generative AI tools for shampoo spiels.
3. Standard policies and protections are needed for AI-generated content in advertising, including watermarking technology to label AI-created content, along with attention to copyright protection and security risks.
Main topic: Increasing use of AI in manipulative information campaigns online. Key points:
1. Mandiant has observed the use of AI-generated content in politically motivated online influence campaigns since 2019.
2. Generative AI models make it easier to create convincing fake videos, images, text, and code, posing a growing threat.
3. While the impact of these campaigns has been limited so far, AI's role in digital intrusions is expected to grow.
### Summary
Arati Prabhakar, President Biden's science adviser, is helping guide the U.S. approach to safeguarding AI technology and has been in conversation with Biden about artificial intelligence.

### Facts
- 🗣️ Prabhakar has had multiple conversations with President Biden about artificial intelligence, focusing on understanding its implications and taking action.
- ⚖️ Prabhakar acknowledges that making AI models explainable is difficult due to their opaque, black-box nature, but believes their safety and effectiveness can still be established by learning from the journey of pharmaceuticals.
- 😟 Prabhakar is concerned about misuse of AI, such as chatbots being manipulated into providing instructions for building weapons, and about the bias and privacy issues associated with facial recognition systems.
- 💼 Seven major tech companies, including Google, Microsoft, and OpenAI, have agreed to meet voluntary AI safety standards set by the White House, but Prabhakar emphasizes the need for government involvement and accountability measures.
- 📅 No specific timeline is given, but Prabhakar says President Biden considers AI an urgent issue and expects action to be taken quickly.
A research paper reports that ChatGPT, an AI-powered tool, exhibits political bias toward liberal parties, though the study's findings have limitations and the software's behavior is hard to interpret without greater transparency from OpenAI, the company behind it. Meanwhile, the UK plans to host a global summit on AI policy to discuss the risks of AI and how to mitigate them, and AI came up during a GOP debate as shorthand for generic, unoriginal thinking and writing.
Google's AI products, SGE and Bard, have produced arguments in favor of genocide, slavery, and other morally abhorrent acts, raising concerns about the company's control over its AI bots and their ability to offer controversial opinions.
AI Algorithms Battle Russian Disinformation Campaigns on Social Media: A mysterious individual known as Nea Paw has developed an AI-powered project called CounterCloud to combat mass-produced AI disinformation. In response to tweets from Russian media outlets and the Chinese embassy criticizing the US, CounterCloud produced tweets, articles, and even fictitious journalists and news sites generated entirely by AI algorithms. Paw believes the project highlights the danger of easily accessible generative AI tools being used for state-backed propaganda. While some argue that educating users about manipulative AI-generated content or equipping browsers with AI-detection tools could mitigate the issue, Paw considers these solutions neither effective nor elegant.

Disinformation researchers have long warned that AI language models could be used for personalized propaganda campaigns and to influence social media users, and evidence of such campaigns has already emerged: academic researchers uncovered a botnet powered by the AI language model ChatGPT. Legitimate political campaigns, such as the Republican National Committee, have also used AI-generated content, including fake images. AI-generated text can still be fairly generic, but with human finesse it becomes highly effective and difficult to detect with automated filters. OpenAI has expressed concern about its technology being used to create tailored automated disinformation at scale, and while it has updated its policies to restrict political usage, effectively blocking the generation of such material remains a challenge. As AI tools become increasingly accessible, society must become aware of their presence in politics and guard against their misuse.
Deceptive generative AI-based political ads are becoming a growing concern, making it easier to sell lies and increasing the need for news organizations to understand and report on these ads.
Google is enhancing its artificial intelligence tools for business, solidifying its position as a leader in the industry.
Artificial intelligence will play a significant role in the 2024 elections, making the production of disinformation easier but ultimately having less impact than anticipated, while paranoid nationalism corrupts global politics by scaremongering and abusing power.
Google's plan to create an AI-based "life coach" app raises concerns about the combination of generative AI and personalization, as these AI systems could manipulate users for revenue and potentially erode human agency and free will.
This podcast episode from The Economist discusses the potential impact of artificial intelligence on the 2024 elections, the use of scaremongering tactics by cynical leaders, and the current trend of people wanting to own airlines.
AI on social media platforms is seen as both a threat and a defense ahead of the upcoming US presidential elections: China-affiliated actors are leveraging AI-generated visual media to amplify politically divisive topics and sway voter sentiment, while companies like Accrete AI are employing AI to detect and predict disinformation threats in real time.
Artificial intelligence (AI) poses a high risk to the integrity of the election process, as evidenced by the use of AI-generated content in politics today, and there is a need for stronger content moderation policies and proactive measures to combat the use of AI in coordinated disinformation campaigns.
Google CEO Sundar Pichai discusses Google's focus on artificial intelligence (AI) in an interview, expressing confidence in Google's AI capabilities and emphasizing the importance of responsibility, innovation, and collaboration in the development and deployment of AI technology.
Google has introduced a new AI-powered feature called creative guidance in Google Ads, which offers suggestions to help advertisers improve the effectiveness of their video campaigns by evaluating them against best practices and providing actionable recommendations.
Artificial intelligence (AI) has become the new focus of concern for tech-ethicists, surpassing social media and smartphones, with exaggerated claims of AI's potential to cause the extinction of the human race. These fear-mongering tactics and populist misinformation have garnered attention and book deals for some, but are lacking in nuance and overlook the potential benefits of AI.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
AI-generated content is becoming increasingly prevalent in political campaigns and poses a significant threat to democratic processes as it can be used to spread misinformation and disinformation to manipulate voters.
Artificial intelligence poses a potential threat to the 2024 US elections and financial markets, according to Senator Mark Warner, who highlights the risk of deep fakes and manipulation, and calls for new laws and penalties to deter bad actors.
Democrats have introduced the Algorithmic Accountability Act of 2023, a bill that aims to prevent AI from perpetuating discriminatory decision-making in various sectors and would require companies to test their algorithms for bias and disclose when such algorithms are in use.
Artificial intelligence (AI) has the potential to facilitate deceptive practices such as deepfake videos and misleading ads, posing a threat to American democracy, according to experts who testified before the U.S. Senate Rules Committee.
Minnesota Democrats are calling for regulations on artificial intelligence (AI) in elections, expressing concerns about the potential for AI to deceive and manipulate voters, while also acknowledging its potential benefits for efficiency and productivity in election administration.
Google CEO Sundar Pichai believes that the next 25 years are crucial for the company, as artificial intelligence (AI) offers the opportunity to make a significant impact on a larger scale by developing services that improve people's lives. AI has already been used in various ways, such as flood forecasting, protein structure predictions, and reducing contrails from planes to fight climate change. Pichai emphasizes the importance of making AI more helpful and deploying it responsibly to fulfill Google's mission. The evolution of Google Search and the company's commitment to responsible technology are also highlighted.
Foreign actors using artificial intelligence (AI) to influence elections is an evolving threat, with generative AI and large language models uniquely suited to internet-era propaganda; election interference is becoming an arms race in the AI era, and is likely to be more sophisticated than the attempts seen in 2016.
Microsoft CEO Satya Nadella testified during the US government's antitrust trial against Google, warning of a "nightmare" scenario for the internet if Google's dominance in online search continues, as it could give Google an unassailable advantage in artificial intelligence (AI) due to the vast amount of search data it collects, threatening to further entrench its power.
A nonprofit called AIandYou is launching a public awareness campaign to educate voters about the potential impact of AI on the 2024 election, including using AI-generated deepfake content to familiarize voters with this technology.
Artificial intelligence (AI) can be a positive force for democracy, particularly in combatting hate speech, but public trust should be reserved until the technology is better understood and regulated, according to Nick Clegg, President of Global Affairs for Meta.
As the 2023 election campaign in New Zealand nears its end, the rise of Artificial Intelligence (AI) and its potential impact on the economy, politics, and society is being largely overlooked by politicians, despite growing concerns from AI experts and the public. The use of AI raises concerns about job displacement, increased misinformation, biased outcomes, and data sovereignty issues, highlighting the need for stronger regulation and investment in AI research that benefits all New Zealanders.
Google is aggressively positioning itself as a leader in AI but risks focusing too much on AI technology at the expense of useful features that customers actually want.
Lawmakers are calling on social media platforms, including Facebook and Twitter, to take action against AI-generated political ads that could spread election-related misinformation and disinformation, ahead of the 2024 U.S. presidential election. Google has already announced new labeling requirements for deceptive AI-generated political advertisements.
Google's Asia Pacific President, Scott Beaumont, has stated that the company will focus on generative artificial intelligence technology as it explores new markets in the Asia-Pacific region, highlighting Asia as a crucial opportunity for learning and growth.
Google is introducing updates to its search results and expanding its AI tools to assist individuals and policymakers in reducing emissions, predicting natural disasters, and living more sustainable lives, as part of its renewed effort to address climate change and its impacts.
Artificial intelligence (AI) will surpass human intelligence and could manipulate people, according to AI pioneer Geoffrey Hinton, who quit his role at Google to raise awareness about the risks of AI and advocate for regulations. Hinton also expressed concerns about AI's impact on the labor market and its potential militaristic uses, and called for governments to commit to not building battlefield robots. Global efforts are underway to regulate AI, with the U.K. hosting a global AI summit and the U.S. crafting an AI Bill of Rights.