### Summary
The article discusses the rapid advancement and potential risks of artificial intelligence (AI) and proposes nationalizing certain aspects of AI under a governing body, the Humane AI Commission, to ensure AI remains aligned with human interests.
### Facts
- AI is evolving rapidly and penetrating various aspects of American life, from image recognition to healthcare.
- AI has the potential to bring both significant benefits and risks to society.
- Transparency in AI is limited, and understanding how specific AI systems work is difficult.
- Congress is becoming more aware of the importance of AI and its need for regulation.
- The author proposes the creation of a governing body, the Humane AI Commission, that can control and steer AI technology to serve humanity's best interests.
- The nationalization of advanced AI models could be considered, similar to the Atomic Energy Commission's control over nuclear reactors.
- Various options, such as an AI pause or leaving AI development to the free market or current government agencies, have limitations in addressing the potential risks of AI.
- The author suggests that the United States should take a bold executive leadership approach to develop a national AI plan and ensure global AI leadership with a focus on benevolence and human-controlled AI.
### 🤖 AI Nationalization - The case to nationalize the “nuclear reactors” of AI — the world’s most advanced AI models — hinges on this question: Who do we want to control AI’s nuclear codes? Big Tech CEOs answering to a few billionaire shareholders, or the government of the United States, answering to its citizens?
### 👥 Humane AI Commission - The author proposes the creation of a Humane AI Commission, run by AI experts, to steer and control AI technology in alignment with human interests.
### ⚠️ Risks of AI - AI's rapid advancement and lack of transparency pose risks such as unpredictable behavior; damage to power generation, financial markets, and public health; and the possibility of AI moving beyond human control.
### ⚖️ AI Regulation - The article calls for federal regulation of AI, but emphasizes the limitations of traditional regulation in addressing the fast-evolving nature of AI and the need for a larger-scale approach like nationalization.
### Summary
Artificial intelligence (AI) is a transformative technology that will reshape politics, economies, and societies, but it also poses significant challenges and risks. To effectively govern AI, policymakers should adopt a new governance framework that is precautionary, agile, inclusive, impermeable, and targeted. This framework should be built upon common principles and encompass three overlapping governance regimes: one for establishing facts and advising governments, one for preventing AI arms races, and one for managing disruptive forces. Additionally, global AI governance must move past traditional conceptions of sovereignty and invite technology companies to participate in rule-making processes.
### Facts
- **AI Progression**: AI systems have been evolving rapidly and possess the potential to self-improve and achieve quasi-autonomy. Models with trillions of parameters, approaching brain scale, could be viable within a few years.
- **Dual Use**: AI is dual-use, meaning it has both military and civilian applications. The boundaries between the two are blurred, and AI can be used to create and spread misinformation, conduct surveillance, and produce powerful weapons.
- **Accessibility and Proliferation Risks**: AI has become increasingly accessible and widely proliferated, making regulatory efforts challenging. The ease of copying AI algorithms and models poses proliferation risks, along with the potential for misuse and unintended consequences.
- **Shift in Global Power**: AI's advancement and geopolitical competition in AI supremacy are shifting the structure and balance of global power. Technology companies are becoming powerful actors in the digital realm, challenging the authority of nation-states.
- **Inadequate Governance**: Current regulatory efforts are insufficient to govern AI effectively. There is a need for a new governance framework that is agile, inclusive, and targeted to address the unique challenges posed by AI.
- **Principles for AI Governance**: Precaution, agility, inclusivity, impermeability, and targeting are key principles for AI governance. These principles should guide the development of granular regulatory frameworks.
- **Three Overlapping Governance Regimes**: Policy frameworks should include a regime for fact-finding, advising governments on AI risks; a regime for preventing AI arms races through international cooperation and monitoring; and a regime for managing disruptive forces and crises related to AI.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulations. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the science, innovation, and technology committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed in order to guide the upcoming global AI safety summit at Bletchley Park.
Nvidia's processors could be used as leverage for the US to impose its regulations on AI globally, according to Mustafa Suleyman, co-founder of DeepMind and Inflection AI. However, Washington is lagging behind Europe and China in terms of AI regulation.
The U.K. has outlined its priorities for the upcoming global AI summit, with a focus on risk and policy to regulate the technology and ensure its safe development for the public good.
A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI), while the majority of respondents believe that AI will be capable of performing less than 20% of tasks currently done by humans.
The rivalry between the US and China over artificial intelligence (AI) is intensifying as both countries compete for dominance in the emerging field, but experts suggest that cooperation on certain issues is necessary to prevent conflicts and ensure global governance of AI. While tensions remain high and trust is lacking, potential areas of cooperation include AI safety and regulations. However, failure to cooperate could increase the risk of armed conflict and hinder the exploration and governance of AI.
The G20 member nations have pledged to use artificial intelligence (AI) in a responsible manner, addressing concerns such as data protection, biases, human oversight, and ethics, while also planning for the future of cryptocurrencies and central bank digital currencies (CBDCs).
Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
AI has the potential to fundamentally change governments and society, with AI-powered companies and individuals usurping traditional institutions and creating a new world order, warns economist Samuel Hammond. Traditional governments may struggle to regulate AI and keep pace with its advancements, potentially leading to a loss of global power for these governments.
EU digital boss Vera Jourova will propose the creation of a global governing body for artificial intelligence (AI) during her trip to China, aiming to address the risks associated with the rapid development of AI technology and involve Beijing in global discussions on this topic.
Representatives from several countries and companies announced commitments to harness the power of artificial intelligence (AI) to advance progress in achieving the United Nations' Sustainable Development Goals (SDGs) during a ministerial side event at the United Nations' 78th Session High Level Week. These commitments focused on using AI to address issues related to health, education, food security, energy, and climate action, with an emphasis on inclusive and responsible governance of AI.
President Biden has called for the governance of artificial intelligence to ensure it is used as a tool of opportunity and not as a weapon of oppression, emphasizing the need for international collaboration and regulation in this area.
Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
The United States must prioritize global leadership in artificial intelligence (AI) and win the platform competition with China in order to protect national security, democracy, and economic prosperity, according to Ylli Bajraktari, the president and CEO of the Special Competitive Studies Project and former Pentagon official.
Amazon will require publishers who use AI-generated content to disclose their use of the technology, small businesses are set to benefit from AI and cloud technologies, and President Biden warns the UN about the potential risks of AI and the need for its governance, according to the latest AI technology advancements reported by Fox News.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and insufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
Wikipedia founder Jimmy Wales believes that regulating artificial intelligence (AI) is not feasible and compares the idea to "magical thinking," stating that many politicians lack a strong understanding of AI and its potential. While the UN is establishing a panel to investigate global regulation of AI, some experts, including physicist Reinhard Scholl, emphasize the need for regulation to prevent the misuse of AI by bad actors, while others, like Robert Opp, suggest forming a regulatory body similar to the International Civil Aviation Organisation. However, Wales argues that regulating individual developers using freely available AI software is impractical.
Deputy Prime Minister Oliver Dowden will warn the UN that artificial intelligence (AI) poses a threat to world order unless governments take action, with fears that the rapid pace of AI development could lead to job losses, misinformation, and discrimination without proper regulations in place. Dowden will call for global regulation and emphasize the importance of making rules in parallel with AI development rather than retroactively. Despite the need for regulation, experts note the complexity of reaching a quick international agreement, with meaningful input needed from smaller countries, marginalized communities, and ethnic minorities. The UK aims to take the lead in AI regulation, but there are concerns that without swift action, the European Union's AI Act could become the global standard instead.
Israeli Prime Minister Benjamin Netanyahu warned of the potential dangers of artificial intelligence (AI) and called for responsible and ethical development of AI during his speech at the United Nations General Assembly, emphasizing that nations must work together to prevent the perils of AI and ensure it brings more freedom and benefits humanity.
To ensure ethical and responsible adoption of AI technology, organizations should establish an AI ethics advisor, stay updated on regulations, invest in AI training, and collaborate with an AI consortium.
The United Nations aims to bring inclusiveness, legitimacy, and authority to the regulation of artificial intelligence, leveraging its experience with managing the impact of various technologies and creating compliance pressure for commitments made by governments, according to Amandeep Gill, the organization's top tech-policy official. Despite the challenges of building consensus and engaging stakeholders, the U.N. seeks to promote diverse and inclusive innovation to ensure equal opportunities and prevent concentration of economic power. Gill also emphasizes the potential of AI in accelerating progress towards the Sustainable Development Goals but expresses concerns about potential misuse and concentration of power.
Artificial intelligence (AI) will have a significant impact on geopolitics and globalization, driving a new globalization but also posing risks that the world is not yet ready for, according to political scientist Ian Bremmer. Global leaders and policymakers are now catching up and discussing the implications of AI, but a greater understanding of the technology is needed for effective regulation. Bremmer suggests international cooperation, such as a United Nations-driven process, to establish global oversight and prevent the U.S. versus China competition in AI development.
Artificial intelligence (AI) has the potential to disrupt industries and requires the attention of boards of directors to consider the strategic implications, risks, compliance, and governance issues associated with its use.
China's use of artificial intelligence (AI) for surveillance and oppression should deter the United States from collaborating with China on AI development and instead focus on asserting itself in international standards-setting bodies, open sourcing AI technologies, and promoting explainable AI to ensure transparency and uphold democratic values.
Advisers to UK Chancellor Rishi Sunak are working on a statement to be used in a communique at the AI safety summit next month, although they are unlikely to reach an agreement on establishing a new international organisation to oversee AI. The summit will focus on the risks of AI models, debate national security agencies' scrutiny of dangerous versions of the technology, and discuss international cooperation on AI that poses a threat to human life.
Artificial intelligence (AI) will surpass human intelligence and could manipulate people, according to AI pioneer Geoffrey Hinton, who quit his role at Google to raise awareness about the risks of AI and advocate for regulations. Hinton also expressed concerns about AI's impact on the labor market and its potential militaristic uses, and called for governments to commit to not building battlefield robots. Global efforts are underway to regulate AI, with the U.K. hosting a global AI summit and the U.S. crafting an AI Bill of Rights.
Governments around the world are considering AI regulations to address concerns such as misinformation, job loss, and the misuse of AI technologies, with different approaches taken by countries like the US, UK, EU, China, Japan, Brazil, and Israel.
China should seize the emerging opportunities in artificial intelligence (AI) to reshape global power dynamics and establish a new "international pattern and order," as AI is expected to bring deep economic and societal changes and determine the future shape of global economics. By mastering AI innovation and its applications, along with data, computing, and algorithms, a country can disrupt the existing global power balance, according to a report by the People's Daily research unit. China has been actively pursuing AI development while also implementing regulations to govern its use and mitigate risks.
The UK government's Tech Secretary, Michelle Donelan, has dismissed claims that the UK aims to establish a global regulator for artificial intelligence, stating that the upcoming AI safety summit will instead focus on international collaboration and risk management frameworks.
China has launched an AI framework called the Global AI Governance Initiative, urging equal rights and opportunities for all nations, in response to the United States' restrictions on access to advanced chips and chipmaking tools, as both countries compete for leadership in setting global AI rules and standards.
The United Kingdom will host an international summit on artificial intelligence safety in November 2023, focusing on the potential existential threat of AI and establishing the country as a mediator in technology post-Brexit. British Prime Minister Rishi Sunak, along with Vice President Kamala Harris and other distinguished guests, aims to initiate a global conversation on AI regulation and address concerns about its misuse.
The UK government's global summit on AI governance, scheduled for November 1 and 2, is expected to be underwhelming and exclude important players in the UK AI industry, leading to concerns about the country falling behind in the development and regulation of AI technology.
The risks posed by artificial intelligence must be treated as seriously as the climate crisis, and immediate action is needed to address those risks, according to Demis Hassabis, the CEO of Google's AI unit. Hassabis suggests that oversight of the AI industry could start with a body similar to the Intergovernmental Panel on Climate Change (IPCC).
A group of 24 AI experts, including Geoffrey Hinton and Yoshua Bengio, have released a paper urging governments to take action in managing the risks associated with AI, particularly extreme risks posed by advanced systems, and have made policy recommendations to promote safe and ethical use of AI.
The United Nations Secretary-General has formed a 39-member advisory body comprised of tech executives, government officials, and academics, to address issues related to the international governance of artificial intelligence and the potential risks and challenges associated with it.
The UN is convening a multi-stakeholder High-level Advisory Body on AI to analyze and provide recommendations for the international governance of AI, aiming to align governance with human rights and the Sustainable Development Goals. The Body comprises experts from various sectors and will consult with existing initiatives and international organizations.