Main topic: Public opinion on slowing down AI development
Key points:
1. 72 percent of American voters want to slow down AI development.
2. 82 percent of American voters don't trust AI companies to self-regulate.
3. There is strong public support for thorough AI regulation, including requiring disclosure of AI-generated images and proof that advanced AI models are safe.
### Summary
The article discusses the rapid advancement and potential risks of artificial intelligence (AI) and proposes nationalizing certain aspects of AI under a governing body, the Humane AI Commission, to ensure AI remains aligned with human interests.
### Facts
- AI is evolving rapidly and penetrating various aspects of American life, from image recognition to healthcare.
- AI has the potential to bring both significant benefits and risks to society.
- Transparency in AI is limited, and understanding how a specific AI system works is difficult.
- Congress is becoming more aware of the importance of AI and its need for regulation.
- The author proposes the creation of a governing body, the Humane AI Commission, that can control and steer AI technology to serve humanity's best interests.
- The nationalization of advanced AI models could be considered, similar to the Atomic Energy Commission's control over nuclear reactors.
- Various options, such as an AI pause or leaving AI development to the free market or current government agencies, have limitations in addressing the potential risks of AI.
- The author suggests that the United States should exercise bold executive leadership to develop a national AI plan and secure global AI leadership, with a focus on benevolent, human-controlled AI.
### 🤖 AI Nationalization - The case to nationalize the “nuclear reactors” of AI — the world’s most advanced AI models — hinges on this question: Who do we want to control AI’s nuclear codes? Big Tech CEOs answering to a few billionaire shareholders, or the government of the United States, answering to its citizens?
### 👥 Humane AI Commission - The author proposes the creation of a Humane AI Commission, run by AI experts, to steer and control AI technology in alignment with human interests.
### ⚠️ Risks of AI - AI's rapid advancement and lack of transparency pose risks such as unpredictable behavior; potential disruption of power generation, financial markets, and public health; and the possibility of AI moving beyond human control.
### ⚖️ AI Regulation - The article calls for federal regulation of AI, but emphasizes the limitations of traditional regulation in addressing the fast-evolving nature of AI and the need for a larger-scale approach like nationalization.
AI executives may be exaggerating the dangers of artificial intelligence in order to advance their own interests, according to an analysis of responses to proposed AI regulations.
A new poll conducted by the AI Policy Institute reveals that 72 percent of American voters want to slow down the development of AI, signaling a divergence between elite opinion and public opinion on the technology. Additionally, the poll shows that 82 percent of American voters do not trust AI companies to self-regulate. To address these concerns, the AI Now Institute has proposed a framework called "Zero Trust AI Governance," which calls for lawmakers to vigorously enforce existing laws, establish bold and easily administrable rules, and place the burden of proof on companies to demonstrate the safety of their AI systems.
Congress should prioritize maintaining bipartisan commitment on AI, establishing global AI guardrails, and seeking out local perspectives in order to develop effective and responsible AI policies.
AI-generated inventions should be eligible for patent protection to encourage innovation and maximize social benefits, as current laws hinder progress in biomedicine. Jurisdictions around the world take differing approaches to patenting AI-generated inventions, and the US lags in this area, highlighting the need for legislative action.
The U.S. is falling behind in regulating artificial intelligence (AI) while the European Parliament has passed the world's first comprehensive AI law; President Joe Biden recently met with industry leaders to discuss the need for AI regulation, and companies pledged to develop safeguards for AI-generated content and prioritize user privacy.
In his book, Tom Kemp argues for the need to regulate AI and suggests measures such as AI impact assessments, AI certifications, codes of conduct, and industry standards to protect consumers and ensure AI's positive impact on society.
The increasing investment in generative AI and its disruptive impact on various industries have brought the need for regulation to the forefront, with technologists and regulators recognizing the importance of safer applications of the technology but differing on the scope of regulation needed. It is argued, however, that existing frameworks and standards, similar to those applied to the internet, can be adapted to regulate AI and protect consumer interests without stifling innovation.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulations. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the science, innovation, and technology committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed in order to guide the upcoming global AI safety summit at Bletchley Park.
Artificial intelligence regulation varies across countries, with Brazil focusing on user rights and risk assessments, China emphasizing "true and accurate" content generation, the EU categorizing AI into three risk levels, Israel promoting responsible innovation and self-regulation, Italy allocating funds for worker support, Japan adopting a wait-and-see approach, and the UAE prioritizing AI development and integration.
A survey of 213 computer science professors suggests support for creating a new US federal agency to govern artificial intelligence (AI); a majority of respondents also believe AI will be capable of performing fewer than 20% of tasks currently done by humans.
A survey of 600 Floridians revealed that while many perceive advances in AI to be promising, there are significant concerns about its economic impact and implications for human security, with 75% expressing worry that AI could pose a risk to human safety and 54% fearing it could threaten their employment in the future.
The market for foundation models in artificial intelligence (AI) tends toward concentration, which raises competition-policy concerns about potential monopolies but may also allow safety risks to be better internalized; regulators should adopt a two-pronged strategy, keeping the market contestable while regulating producers, to maintain competition and protect users.
Lawmakers in the Senate Energy Committee were warned about the threats and opportunities associated with the integration of artificial intelligence (AI) into the U.S. energy sector, with a particular emphasis on the risk posed by China's AI advancements and the need for education and regulation to mitigate negative impacts.
Tech industry lobbyists are turning their attention to state capitals in order to influence AI legislation and prevent the imposition of stricter rules across the nation, as states often act faster than Congress when it comes to tech issues; consumer advocates are concerned about the industry's dominance in shaping AI policy discussions.
Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
Eight more big tech companies, including Adobe, IBM, Palantir, Nvidia, and Salesforce, have pledged in a meeting with White House officials to follow voluntary safety, security, and trust standards for artificial intelligence (AI), joining the initiative led by Amazon, Google, Microsoft, and others. The commitments signal a "bridge" to future government action and come amid congressional scrutiny and ongoing White House efforts to develop AI policies.
China's targeted and iterative approach to regulating artificial intelligence (AI) could provide valuable lessons for the United States, despite ideological differences, as the U.S. Congress grapples with comprehensive AI legislation covering various issues like national security, job impact, and democratic values. Learning from China's regulatory structure and process can help U.S. policymakers respond more effectively to the challenges posed by AI.
Tesla CEO Elon Musk suggests the need for government regulation of artificial intelligence, even proposing the creation of a Department of AI, during a gathering of tech CEOs in Washington. Senate Majority Leader Chuck Schumer and other attendees also expressed the view that government should play a role in regulating AI. The options for regulation range from a standalone department to leveraging existing agencies, but the debate is expected to continue in the coming months.
The nation's top tech executives, including Elon Musk, Mark Zuckerberg, and Sundar Pichai, showed support for government regulations on artificial intelligence during a closed-door meeting in the U.S. Senate, although there is little consensus on what those regulations should entail and the political path for legislation remains challenging.
A Gallup survey found that 79% of Americans have little or no trust in businesses using AI responsibly, with only 21% trusting them to some extent.
Tech leaders gathered in Washington, DC, to discuss AI regulation and endorsed the need for laws governing generative AI technology, although there was little consensus on the specifics of those regulations.
A bipartisan group of senators is expected to introduce legislation to create a government agency to regulate AI and require AI models to obtain a license before deployment, a move that some leading technology companies have supported; however, critics argue that licensing regimes and a new AI regulator could hinder innovation and concentrate power among existing players, similar to the undesirable economic consequences seen in Europe.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
China's new artificial intelligence (AI) rules, among the strictest in the world, have been watered down and are not being strictly enforced, a shift that could affect the country's technological competition with the U.S. and influence AI policy globally. If enforced to the maximum, the regulations would be difficult for Chinese AI developers to comply with; relaxed enforcement and regulatory leniency may still allow Chinese tech firms to remain competitive.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and insufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
Wikipedia founder Jimmy Wales believes that regulating artificial intelligence (AI) is not feasible and compares the idea to "magical thinking," stating that many politicians lack a strong understanding of AI and its potential. While the UN is establishing a panel to investigate global regulation of AI, some experts, including physicist Reinhard Scholl, emphasize the need for regulation to prevent the misuse of AI by bad actors, while others, like Robert Opp, suggest forming a regulatory body similar to the International Civil Aviation Organisation. However, Wales argues that regulating individual developers using freely available AI software is impractical.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.
Sen. Mark Warner, a U.S. Senator from Virginia, is urging Congress to take a less ambitious approach to regulating artificial intelligence (AI), suggesting that lawmakers target narrow, discrete issues rather than trying to address the full spectrum of AI risks with a single comprehensive law. Warner believes that tackling immediate concerns, such as AI-generated deepfakes, is a more realistic and effective approach to regulation. He also emphasizes the need for bipartisan agreement and action to demonstrate progress on AI regulation, especially given Congress's previous failures to address issues related to social media.
Minnesota Democrats are calling for regulations on artificial intelligence (AI) in elections, expressing concerns about the potential for AI to deceive and manipulate voters, while also acknowledging its potential benefits for efficiency and productivity in election administration.
European regulators are scrutinizing chipmakers like Nvidia over concerns about illegal competition practices and the potential for such firms to dominate AI technology's supply chain, as the importance of computing power to AI adoption becomes apparent.
AI leaders including Alphabet CEO Sundar Pichai, Microsoft president Brad Smith, and OpenAI's Sam Altman support AI regulation because it offers investment security, unified rules, and a role in shaping legislation; regulation also benefits consumers by ensuring safety, cracking down on scams and discrimination, and curbing bias.
The US has expressed concerns that the European Union's proposed AI regulation law would benefit larger companies and hinder smaller firms, potentially leading to a migration of jobs and investment away from the EU.
China's use of artificial intelligence (AI) for surveillance and oppression should deter the United States from collaborating with China on AI development; instead, the US should assert itself in international standards-setting bodies, open source AI technologies, and promote explainable AI to ensure transparency and uphold democratic values.
A new poll shows that 77% of Americans support the federal government developing its own AI resources and staff instead of outsourcing to private consultants and big tech companies. The outsourcing approach raises concerns about conflicts of interest, high costs, and the consolidation of power among big tech giants. Policymakers have the opportunity to build public capacity by addressing the lack of AI experts in government and improving coordination between government IT teams.
The United States is considering export controls on general-purpose AI programs, known as frontier models, to throttle China's development of artificial intelligence and guard against risks such as disinformation and biochemical weapon creation; the move could also weaken AI innovation in the US and heighten tensions between the two countries.
Governments have made little progress in regulating artificial intelligence despite growing concerns about its safety, while Big Tech companies have regained control over the sector and are shaping norms through their own proposed regulatory models, according to the 2023 State of AI report.
Governments around the world are considering AI regulations to address concerns such as misinformation, job loss, and the misuse of AI technologies, with different approaches taken by countries like the US, UK, EU, China, Japan, Brazil, and Israel.
AI technology has advanced rapidly, bringing benefits such as improved accuracy alongside potential risks to the economy, national security, and various industries; government regulation and ethical safeguards are needed to prevent misuse and protect human values.