In this episode of the "Have a Nice Future" podcast, Gideon Lichfield and Lauren Goode interview Mustafa Suleyman, co-founder of DeepMind and Inflection AI. The discussion centers on Suleyman's new book, "The Coming Wave," which examines the potential impact of AI and other emerging technologies on society and governance. Key points include Suleyman's concern that AI proliferation could undermine nation-states and deepen inequality, the technology's potential to help lift people out of poverty, and the need for better tools for assessing AI.
### Summary
Artificial intelligence (AI) is a transformative technology that will reshape politics, economies, and societies, but it also poses significant challenges and risks. To effectively govern AI, policymakers should adopt a new governance framework that is precautionary, agile, inclusive, impermeable, and targeted. This framework should be built upon common principles and encompass three overlapping governance regimes: one for establishing facts and advising governments, one for preventing AI arms races, and one for managing disruptive forces. Additionally, global AI governance must move past traditional conceptions of sovereignty and invite technology companies to participate in rule-making processes.
### Facts
- **AI Progression**: AI systems have been evolving rapidly and possess the potential to self-improve and achieve quasi-autonomy. Models with trillions of parameters and brain-scale models could be viable within a few years.
- **Dual Use**: AI is dual-use, meaning it has both military and civilian applications. The boundaries between the two are blurred, and AI can be used to create and spread misinformation, conduct surveillance, and produce powerful weapons.
- **Accessibility and Proliferation Risks**: AI has become increasingly accessible and widespread, making regulation difficult. The ease of copying AI algorithms and models creates proliferation risks, along with the potential for misuse and unintended consequences.
- **Shift in Global Power**: AI's advancement and geopolitical competition in AI supremacy are shifting the structure and balance of global power. Technology companies are becoming powerful actors in the digital realm, challenging the authority of nation-states.
- **Inadequate Governance**: Current regulatory efforts are insufficient to govern AI effectively. There is a need for a new governance framework that is agile, inclusive, and targeted to address the unique challenges posed by AI.
- **Principles for AI Governance**: Precaution, agility, inclusivity, impermeability, and targeting are key principles for AI governance. These principles should guide the development of granular regulatory frameworks.
- **Three Overlapping Governance Regimes**: Policy frameworks should include a regime for establishing facts and advising governments on AI risks; a regime for preventing AI arms races through international cooperation and monitoring; and a regime for managing the disruptive forces and crises that AI unleashes.
### Emoji
🤖
### Summary
The rapid advancement of artificial intelligence (AI) offers significant benefits but also poses serious risks, with some experts warning of harms up to and including the threat of extinction. Government and industry are working to manage these risks and regulate AI technology, while also addressing misinformation, bias, and the need for greater public literacy about AI.
### Facts
- The use of AI is rapidly growing in various areas such as health care, the workplace, education, arts, and entertainment.
- The Center for AI Safety (CAIS) issued a warning signed by hundreds of individuals, including tech industry leaders and scientists, about the need to prioritize mitigating the risks of AI alongside global-scale dangers like pandemics and nuclear war.
- CEO of OpenAI, Sam Altman, expressed both the benefits and concerns associated with AI technology, emphasizing the need for serious consideration of its risks.
- Some experts believe that the warnings about potential risks from AI are more long-term scenarios rather than immediate doomsday situations, and caution against the hype surrounding AI.
- The National Conference of State Legislatures (NCSL) is working on AI regulation at the state level, with several states already introducing AI bills and forming advisory groups.
- State legislators aim to define responsible AI utilization by governments and protect constituents engaging with AI in the private sector.
- The federal government is establishing National Artificial Intelligence Research Institutes to invest in long-term AI research.
- Misinformation and disinformation are concerns related to AI, as certain AI algorithms can generate biased and inaccurate information.
- OpenAI acknowledges the potential for AI tools to contribute to disinformation campaigns and is collaborating with researchers and industry peers to address this issue.
- The NCSL report highlights the need for policymakers to understand the programming decisions behind AI systems and their potential impact on citizens.
- Society lacks the literacy needed to distinguish truth from falsehood, enabling the spread and acceptance of AI-generated misinformation.
### 🤖 AI
- The use of artificial intelligence is rapidly advancing across various fields.
- Concerns have been raised about the potential risks and negative impacts of AI.
- Government and industry efforts are underway to manage AI risks and regulate the technology.
- Misinformation, bias, and the lack of societal literacy in understanding AI are additional challenges.
### Summary
President Joe Biden seeks guidance from his science adviser, Arati Prabhakar, on artificial intelligence (AI) and is focused on understanding its implications. Prabhakar emphasizes the importance of taking action to harness the value of AI while addressing its risks.
### Facts
- President Biden has had multiple discussions with Arati Prabhakar regarding artificial intelligence.
- Prabhakar highlights that AI models' lack of explainability is a technical feature of deep-learning systems, but asserts that explainability is not always necessary for effective use and safety, using the example of pharmaceuticals.
- Prabhakar expresses concerns about AI applications, including the inappropriate use of chatbots to obtain information on building weapons, biases in AI systems trained on human data, and privacy issues arising from the accumulation of personal data.
- Several major American tech firms have made voluntary commitments to meet AI safety standards set by the White House, but more participation and government action are needed.
- The Biden administration is actively considering measures to address AI accountability but has not provided a specific timeline.
### Related Emoji
- 🤖: Represents artificial intelligence and technology.
- 🗣️: Represents communication and dialogue.
- ⚠️: Represents risks and concerns.
- 📱: Represents privacy and data security.
- ⏳: Represents urgency and fast action.
Mustafa Suleyman, co-founder of DeepMind and Inflection AI, explains in his new book, "The Coming Wave," why our institutions are ill-equipped to handle rapid advances in technology, including the potential threats posed by AI and the need for better methods of assessing it.
Google DeepMind has commissioned 13 artists to create diverse and accessible art and imagery that aims to change the public’s perception of AI, countering the unrealistic and misleading stereotypes often used to represent the technology. The artwork visualizes key themes related to AI, such as artificial general intelligence, chip design, digital biology, large image models, language models, and the synergy between neuroscience and AI, and it is openly available for download.
Minnesota's Secretary of State, Steve Simon, expresses concern over the potential impact of AI-generated deepfakes on elections, as they can spread false information and distort reality, prompting the need for new laws and enforcement measures.
The proliferation of deepfake videos and audio, fueled by the AI arms race, is impacting businesses by increasing the risk of fraud, cyberattacks, and reputational damage, according to a report by KPMG. Scammers are using deepfakes to deceive people, manipulate company representatives, and swindle money from firms, highlighting the need for vigilance and cybersecurity measures in the face of this threat.
Experts at UVA Center for Politics warn about the risks of using artificial intelligence and deepfakes to manipulate elections.
Artificial intelligence (AI) tools can put human rights at risk, as highlighted by researchers from Amnesty International on the Me, Myself, and AI podcast, who discuss scenarios in which AI is used to track activists and make automated decisions that can lead to discrimination and inequality, emphasizing the need for human intervention and changes in public policy to address these issues.
AI technology is making it easier and cheaper to produce mass-scale propaganda campaigns and disinformation, using generative AI tools to create convincing articles, tweets, and even journalist profiles, raising concerns about the spread of AI-powered fake content and the need for mitigation strategies.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) or risk falling behind the European Union (EU) and the United States, as the EU advances the AI Act and US policymakers publish frameworks for AI regulation. According to a report by the House of Commons Science, Innovation and Technology Committee, the government's current regulatory approach risks lagging behind the fast pace of AI development. The report identifies 12 governance challenges, including bias in AI systems and the production of deepfake material, that should inform the upcoming global AI safety summit at Bletchley Park.
Artificial intelligence will play a significant role in the 2024 elections, making disinformation easier to produce but likely having less impact than anticipated, while paranoid nationalism corrupts global politics through scaremongering and the abuse of power.
AI-generated deepfakes have the potential to manipulate elections, but research suggests that the polarized state of American politics may actually inoculate voters against misinformation regardless of its source.
The rapid advancement of AI technology poses significant challenges for democratic societies, including the need for nuanced debates, public engagement, and ethical considerations in regulating AI to mitigate unintended consequences.
Artificial intelligence (AI) poses a high risk to the integrity of the election process, as evidenced by the use of AI-generated content in politics today, and there is a need for stronger content moderation policies and proactive measures to combat the use of AI in coordinated disinformation campaigns.
Deepfake audio technology, which can generate realistic but false recordings, poses a significant threat to democratic processes by enabling underhanded political tactics and the spread of disinformation, with experts warning that it will be difficult to distinguish between real and fake recordings and that the impact on partisan voters may be minimal. While efforts are being made to develop proactive standards and detection methods to mitigate the damage caused by deepfakes, the industry and governments face challenges in regulating their use effectively, and the widespread dissemination of disinformation remains a concern.
AI has the potential to fundamentally change governments and society, with AI-powered companies and individuals usurping traditional institutions and creating a new world order, warns economist Samuel Hammond. Traditional governments may struggle to regulate AI and keep pace with its advancements, potentially leading to a loss of global power for these governments.
China's targeted and iterative approach to regulating artificial intelligence (AI) could provide valuable lessons for the United States, despite ideological differences, as the U.S. Congress grapples with comprehensive AI legislation covering various issues like national security, job impact, and democratic values. Learning from China's regulatory structure and process can help U.S. policymakers respond more effectively to the challenges posed by AI.
Deepfakes, which are fake videos or images created by AI, pose a real risk to markets, as they can manipulate financial markets and target businesses with scams; however, the most significant negative impact lies in the creation of deepfake pornography, particularly non-consensual explicit content, which causes emotional and physical harm to victims and raises concerns about privacy, consent, and exploitation.
Adversaries and criminal groups are exploiting artificial intelligence (AI) technology to carry out malicious activities, according to FBI Director Christopher Wray, who warned that while AI can automate tasks for law-abiding citizens, it also enables the creation of deepfakes and malicious code, posing a threat to US citizens. The FBI is working to identify and track those misusing AI, but is cautious about using it themselves. Other US security agencies, however, are already utilizing AI to combat various threats, while concerns about China's use of AI for misinformation and propaganda are growing.
President Biden has called for the governance of artificial intelligence to ensure it is used as a tool of opportunity and not as a weapon of oppression, emphasizing the need for international collaboration and regulation in this area.
Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
AI-generated content is becoming increasingly prevalent in political campaigns and poses a significant threat to democratic processes as it can be used to spread misinformation and disinformation to manipulate voters.
Artificial intelligence poses a potential threat to the 2024 US elections and financial markets, according to Senator Mark Warner, who highlights the risk of deep fakes and manipulation, and calls for new laws and penalties to deter bad actors.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, co-founder of DeepMind, believes the focus should be on more practical issues such as regulation, privacy, bias, and online moderation. He is confident that governments can regulate AI effectively by applying frameworks that worked for past technologies, though critics argue that current internet regulations are flawed and fail to hold big tech companies sufficiently accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and of establishing clear boundaries and oversight so that laws are enforceable. Several governments, including the European Union and China, are already working on AI regulations.
There is a need for more policy balance in discussions about artificial intelligence (AI) to focus on the potential for good and how to ensure societal benefit, as AI has the potential to advance education, national security, and economic success, while also providing new economic opportunities and augmenting human capabilities.
AI-generated images have the potential to create alternative history and misinformation, raising concerns about their impact on elections and people's ability to discern truth from manipulated visuals.
Deepfake images and videos created by AI are becoming increasingly prevalent, posing significant threats to society, democracy, and scientific research as they can spread misinformation and be used for malicious purposes; researchers are developing tools to detect and tag synthetic content, but education, regulation, and responsible behavior by technology companies are also needed to address this growing issue.
The battle for the future of AI is not just a debate about the technology, but also about control, power, and how resources should be distributed, with factions divided by ideologies and motives, including concerns about existential risks, present-day harms, and national security.
Artificial intelligence (AI) has the potential to facilitate deceptive practices such as deepfake videos and misleading ads, posing a threat to American democracy, according to experts who testified before the U.S. Senate Rules Committee.
Sen. Mark Warner, a U.S. Senator from Virginia, is urging Congress to take a less ambitious approach to regulating artificial intelligence (AI), suggesting that lawmakers should focus on narrowly focused issues rather than trying to address the full spectrum of AI risks with a single comprehensive law. Warner believes that tackling immediate concerns, such as AI-generated deepfakes, is a more realistic and effective approach to regulation. He also emphasizes the need for bipartisan agreement and action to demonstrate progress in the regulation of AI, especially given Congress's previous failures in addressing issues related to social media.
Minnesota Democrats are calling for regulations on artificial intelligence (AI) in elections, expressing concerns about the potential for AI to deceive and manipulate voters, while also acknowledging its potential benefits for efficiency and productivity in election administration.
A nonprofit called AIandYou is launching a public awareness campaign to educate voters about the potential impact of AI on the 2024 election, including using AI-generated deepfake content to familiarize voters with this technology.
AI-generated disinformation poses a significant threat to elections and democracies worldwide, as the line between fact and fiction becomes increasingly blurred.
Deepfake videos featuring celebrities like Gayle King, Tom Hanks, and Elon Musk have prompted concerns about the misuse of AI technology, leading to calls for legislation and ethical considerations in their creation and dissemination. Celebrities have denounced these AI-generated videos as inauthentic and misleading, emphasizing the need for legal protection and labeling of such content.
Artificial intelligence (AI) will surpass human intelligence and could manipulate people, according to AI pioneer Geoffrey Hinton, who quit his role at Google to raise awareness about the risks of AI and advocate for regulations. Hinton also expressed concerns about AI's impact on the labor market and its potential militaristic uses, and called for governments to commit to not building battlefield robots. Global efforts are underway to regulate AI, with the U.K. hosting a global AI summit and the U.S. crafting an AI Bill of Rights.
U.K. startup Yepic AI, which claims to use "deepfakes for good," violated its own ethics policy by creating and sharing deepfake videos of a TechCrunch reporter without the reporter's consent; the company says it will now update that policy.
A coalition of Democrats is urging President Biden to turn non-binding safeguards on artificial intelligence (AI) into policy through an executive order, using the "AI Bill of Rights" as a guide.
AI technology has advanced rapidly, bringing both positive and negative consequences such as improved accuracy and potential risks to the economy, national security, and various industries, requiring government regulation and ethical considerations to prevent misuse and protect human values.
American venture capitalist Tim Draper warns that scammers are using AI to create deepfake videos and voices in order to scam crypto users.
DeepMind released a paper proposing a framework for evaluating the societal and ethical risks of AI systems ahead of the AI Safety Summit, addressing the need for transparency and examination of AI systems at the "point of human interaction" and the ways in which these systems might be used and embedded in society.