Main Topic: The Biden administration's plan to issue an executive order restricting U.S. investment in high-tech industries in China.
Key Points:
1. The executive order will target specific high-tech sectors in China, such as quantum computing, artificial intelligence, and advanced semiconductors.
2. The order is part of growing tensions between the U.S. and China.
3. The administration had previously delayed certain punitive economic measures against China but denies that it has held back actions needed for national security.
### Summary
The article discusses the rapid advancement and potential risks of artificial intelligence (AI) and proposes the idea of nationalizing certain aspects of AI under a governing body called the Humane AI Commission to ensure AI is aligned with human interests.
### Facts
- AI is evolving rapidly and penetrating various aspects of American life, from image recognition to healthcare.
- AI has the potential to bring both significant benefits and risks to society.
- Transparency in AI is limited, and understanding how specific AI works is difficult.
- Congress is becoming more aware of the importance of AI and its need for regulation.
- The author proposes the creation of a governing body, the Humane AI Commission, that can control and steer AI technology to serve humanity's best interests.
- The nationalization of advanced AI models could be considered, similar to the Atomic Energy Commission's control over nuclear reactors.
- Various options, such as an AI pause or leaving AI development to the free market or current government agencies, have limitations in addressing the potential risks of AI.
- The author suggests that the United States should take bold executive action to develop a national AI plan and secure global AI leadership with a focus on benevolent, human-controlled AI.
### 🤖 AI Nationalization - The case to nationalize the “nuclear reactors” of AI — the world’s most advanced AI models — hinges on this question: Who do we want to control AI’s nuclear codes? Big Tech CEOs answering to a few billionaire shareholders, or the government of the United States, answering to its citizens?
### 👥 Humane AI Commission - The author proposes the creation of a Humane AI Commission, run by AI experts, to steer and control AI technology in alignment with human interests.
### ⚠️ Risks of AI - AI's rapid advancement and lack of transparency pose risks such as unpredictable behavior, damage to power generation, financial markets, and public health, and the possibility of AI moving beyond human control.
### ⚖️ AI Regulation - The article calls for federal regulation of AI, but emphasizes the limitations of traditional regulation in addressing the fast-evolving nature of AI and the need for a larger-scale approach like nationalization.
Congress should prioritize maintaining bipartisan commitment to AI, generating global AI guardrails, and seeking out local perspectives in order to develop effective and responsible AI policies.
The U.S. is falling behind in regulating artificial intelligence (AI), while Europe has passed the world's first comprehensive AI law. President Joe Biden recently met with industry leaders to discuss the need for AI regulation, and companies pledged to develop safeguards for AI-generated content and to prioritize user privacy.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulations. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the science, innovation, and technology committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed in order to guide the upcoming global AI safety summit at Bletchley Park.
Several tech giants in the US, including Alphabet, Microsoft, Meta Platforms, and Amazon, have pledged to collaborate with the Biden administration to address the risks associated with artificial intelligence, focusing on safety, security, and trust in AI development.
A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI), while the majority of respondents believe that AI will be capable of performing less than 20% of tasks currently done by humans.
California Governor Gavin Newsom has issued an executive order instructing state agencies to develop guidelines for the increased use of artificial intelligence (AI), including risk assessment reports and ethical regulations, positioning the state as a leader in AI governance.
Alibaba's new CEO plans to prioritize artificial intelligence, user experience, and the promotion of a younger generation of leaders.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
Tesla CEO Elon Musk suggests the need for government regulation of artificial intelligence, even proposing the creation of a Department of AI, during a gathering of tech CEOs in Washington. Senate Majority Leader Chuck Schumer and other attendees also expressed the view that government should play a role in regulating AI. The options for regulation range from a standalone department to leveraging existing agencies, but the debate is expected to continue in the coming months.
Brazil's Senate has established a work plan to discuss and analyze a bill aimed at regulating artificial intelligence (AI) in the country, with a series of public hearings and a comprehensive assessment to be completed within 120 days.
European Union President Ursula von der Leyen announced a new initiative to provide expedited access to European supercomputers for AI startups, while also calling for the establishment of a global framework for AI governance during her State of the Union address.
California Governor Gavin Newsom has signed an executive order to study the uses and risks of artificial intelligence (AI), with C3.ai CEO Thomas Siebel praising the proposal as "cogent, thoughtful, concise, productive and really extraordinarily positive public policy." Siebel believes that the order aims to understand and mitigate the risks associated with AI applications rather than impose regulation on AI companies.
Artificial intelligence (AI) requires leadership from business executives and a dedicated and diverse AI team to ensure effective implementation and governance, with roles focusing on ethics, legal, security, and training data quality becoming increasingly important.
President Biden has called for the governance of artificial intelligence to ensure it is used as a tool of opportunity and not as a weapon of oppression, emphasizing the need for international collaboration and regulation in this area.
The United States must prioritize global leadership in artificial intelligence (AI) and win the platform competition with China in order to protect national security, democracy, and economic prosperity, according to Ylli Bajraktari, the president and CEO of the Special Competitive Studies Project and former Pentagon official.
Pennsylvania Governor Josh Shapiro signed an executive order establishing standards and a governance framework for the use of artificial intelligence (AI) by state agencies, as well as creating a Generative AI Governing Board and outlining core values to govern AI use. The order aims to responsibly integrate AI into government operations and enhance employee job functions.
The Pennsylvania state government is preparing to incorporate artificial intelligence into its operations, with plans to convene an AI governing board, develop training programs, and recruit AI experts, according to Democratic Gov. Josh Shapiro.
The United Nations is considering the establishment of a new agency to govern artificial intelligence (AI) and promote international cooperation, as concerns grow about the risks and challenges associated with AI development, but some experts express doubts about the support and effectiveness of such a global initiative.
Deputy Prime Minister Oliver Dowden will warn the UN that artificial intelligence (AI) poses a threat to world order unless governments take action, with fears that the rapid pace of AI development could lead to job losses, misinformation, and discrimination without proper regulations in place. Dowden will call for global regulation and emphasize the importance of making rules in parallel with AI development rather than retroactively. Despite the need for regulation, experts note the complexity of reaching a quick international agreement, with meaningful input needed from smaller countries, marginalized communities, and ethnic minorities. The UK aims to take the lead in AI regulation, but there are concerns that without swift action, the European Union's AI Act could become the global standard instead.
The leaked information about a possible executive order by U.S. President Joe Biden on artificial intelligence is causing concern in the bitcoin and crypto industry, as it could have spillover effects on the market.
Sen. Mark Warner, a U.S. Senator from Virginia, is urging Congress to take a less ambitious approach to regulating artificial intelligence (AI), suggesting that lawmakers should focus on narrowly focused issues rather than trying to address the full spectrum of AI risks with a single comprehensive law. Warner believes that tackling immediate concerns, such as AI-generated deepfakes, is a more realistic and effective approach to regulation. He also emphasizes the need for bipartisan agreement and action to demonstrate progress in the regulation of AI, especially given Congress's previous failures in addressing issues related to social media.
Separate negotiations on artificial intelligence in Brussels and Washington highlight the tension between prioritizing short-term risks and long-term problems in AI governance.
Artificial intelligence is a top investment priority for US CEOs, with more than two-thirds ranking investment in generative AI as a primary focus for their companies, driven by the disruptive potential and promising returns on investments expected within the next few years.
Artificial intelligence (AI) will surpass human intelligence and could manipulate people, according to AI pioneer Geoffrey Hinton, who quit his role at Google to raise awareness about the risks of AI and advocate for regulations. Hinton also expressed concerns about AI's impact on the labor market and its potential militaristic uses, and called for governments to commit to not building battlefield robots. Global efforts are underway to regulate AI, with the U.K. hosting a global AI summit and the U.S. crafting an AI Bill of Rights.
A coalition of Democrats is urging President Biden to turn non-binding safeguards on artificial intelligence (AI) into policy through an executive order, using the "AI Bill of Rights" as a guide.
New York City has launched its first-ever Artificial Intelligence Action Plan, aimed at evaluating AI tools and associated risks, building AI knowledge among city government employees, and responsibly implementing AI technology in various sectors.
New York City has unveiled an AI action plan aimed at understanding and responsibly implementing the technology, with steps including the establishment of an AI Steering Committee and engagement with outside experts and the public.
European Union lawmakers have made progress in agreeing on rules for artificial intelligence, particularly on the designation of "high-risk" AI systems, bringing them closer to finalizing the landmark AI Act.
President Joe Biden will deploy federal agencies to monitor artificial intelligence risks and promote its use in various sectors while prioritizing worker protection, according to a draft executive order.
The United Nations has launched a new advisory body to address the risks of artificial intelligence and explore international cooperation in dealing with its challenges, with its recommendations potentially shaping the structure of a U.N. agency for AI governance.