
Tech Giants Back AI Licensing as Congress Debates New Regulations

  • Technology companies such as Microsoft support requiring licenses for AI models before deployment.

  • Congress is considering bills to create a new agency to regulate AI across sectors.

  • Licensing regimes favor large companies and can limit innovation from new players.

  • Europe's heavy regulatory approach to the internet and AI has hampered innovation.

  • Existing laws could address AI concerns without new burdensome requirements.

reason.com
Relevant topic timeline:
- Capitol Hill is not known for being tech-savvy, but during a recent Senate hearing on AI regulation, legislators showed surprising knowledge and understanding of the topic.
- Senator Richard Blumenthal asked about setting safety brakes on AutoGPT, an AI agent that can carry out complex tasks, to ensure its responsible use.
- Senator Josh Hawley raised concerns about the working conditions of Kenyan workers involved in building safety filters for OpenAI's models.
- The hearing featured testimony from Dario Amodei, CEO of Anthropic; Stuart Russell, a computer science professor; and Yoshua Bengio, a professor at Université de Montréal.
- This indicates a growing awareness and interest among lawmakers in understanding and regulating AI technology.
### Summary
The article discusses the rapid advancement and potential risks of artificial intelligence (AI) and proposes the idea of nationalizing certain aspects of AI under a governing body called the Humane AI Commission to ensure AI is aligned with human interests.

### Facts
- AI is evolving rapidly and penetrating various aspects of American life, from image recognition to healthcare.
- AI has the potential to bring both significant benefits and risks to society.
- Transparency in AI is limited, and understanding how specific AI works is difficult.
- Congress is becoming more aware of the importance of AI and its need for regulation.
- The author proposes the creation of a governing body, the Humane AI Commission, that can control and steer AI technology to serve humanity's best interests.
- The nationalization of advanced AI models could be considered, similar to the Atomic Energy Commission's control over nuclear reactors.
- Various options, such as an AI pause or leaving AI development to the free market or current government agencies, have limitations in addressing the potential risks of AI.
- The author suggests that the United States should take a bold executive leadership approach to develop a national AI plan and ensure global AI leadership with a focus on benevolence and human-controlled AI.

### 🤖 AI Nationalization
The case to nationalize the “nuclear reactors” of AI — the world’s most advanced AI models — hinges on this question: Who do we want to control AI’s nuclear codes? Big Tech CEOs answering to a few billionaire shareholders, or the government of the United States, answering to its citizens?

### 👥 Humane AI Commission
The author proposes the creation of a Humane AI Commission, run by AI experts, to steer and control AI technology in alignment with human interests.

### ⚠️ Risks of AI
AI's rapid advancement and lack of transparency pose risks such as unpredictable behavior, potential damage to power generation, financial markets, and public health, and the potential for AI to move beyond human control.

### ⚖️ AI Regulation
The article calls for federal regulation of AI, but emphasizes the limitations of traditional regulation in addressing the fast-evolving nature of AI and the need for a larger-scale approach like nationalization.
### Summary
Artificial intelligence (AI) is a transformative technology that will reshape politics, economies, and societies, but it also poses significant challenges and risks. To effectively govern AI, policymakers should adopt a new governance framework that is precautionary, agile, inclusive, impermeable, and targeted. This framework should be built upon common principles and encompass three overlapping governance regimes: one for establishing facts and advising governments, one for preventing AI arms races, and one for managing disruptive forces. Additionally, global AI governance must move past traditional conceptions of sovereignty and invite technology companies to participate in rule-making processes.

### Facts
- **AI Progression**: AI systems have been evolving rapidly and possess the potential to self-improve and achieve quasi-autonomy. Models with trillions of parameters and brain-scale models could be viable within a few years.
- **Dual Use**: AI is dual-use, meaning it has both military and civilian applications. The boundaries between the two are blurred, and AI can be used to create and spread misinformation, conduct surveillance, and produce powerful weapons.
- **Accessibility and Proliferation Risks**: AI has become increasingly accessible and widespread, making regulatory efforts challenging. The ease of copying AI algorithms and models poses proliferation risks, along with the potential for misuse and unintended consequences.
- **Shift in Global Power**: AI's advancement and geopolitical competition for AI supremacy are shifting the structure and balance of global power. Technology companies are becoming powerful actors in the digital realm, challenging the authority of nation-states.
- **Inadequate Governance**: Current regulatory efforts are insufficient to govern AI effectively. A new governance framework is needed that is agile, inclusive, and targeted to address the unique challenges posed by AI.
- **Principles for AI Governance**: Precaution, agility, inclusivity, impermeability, and targeting are key principles for AI governance. These principles should guide the development of granular regulatory frameworks.
- **Three Overlapping Governance Regimes**: Policy frameworks should include a regime for establishing facts and advising governments on AI risks; a regime for preventing AI arms races through international cooperation and monitoring; and a regime for managing disruptive forces and crises related to AI.
A new poll conducted by the AI Policy Institute reveals that 72 percent of American voters want to slow down the development of AI, signaling a divergence between elite opinion and public opinion on the technology. Additionally, the poll shows that 82 percent of American voters do not trust AI companies to self-regulate. To address these concerns, the AI Now Institute has proposed a framework called "Zero Trust AI Governance," which calls for lawmakers to vigorously enforce existing laws, establish bold and easily administrable rules, and place the burden of proof on companies to demonstrate the safety of their AI systems.
AI-generated inventions should be eligible for patent protection to encourage innovation and maximize social benefits, as current laws hinder progress in biomedicine. Jurisdictions around the world take differing approaches to patenting AI-generated inventions, and the US lags behind in this area, highlighting the need for legislative action.
The U.S. is falling behind in regulating artificial intelligence (AI), while Europe has passed the world's first comprehensive AI law; President Joe Biden recently met with industry leaders to discuss the need for AI regulation and companies pledged to develop safeguards for AI-generated content and prioritize user privacy.
In his book, Tom Kemp argues for the need to regulate AI and suggests measures such as AI impact assessments, AI certifications, codes of conduct, and industry standards to protect consumers and ensure AI's positive impact on society.
The increasing investment in generative AI and its disruptive impact on various industries have brought the need for regulation to the forefront, with technologists and regulators recognizing the importance of ensuring safer technological applications but differing on the scope of regulation needed. However, it is argued that existing frameworks and standards, similar to those applied to the internet, can be adapted to regulate AI and protect consumer interests without stifling innovation.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulation. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the Commons Science, Innovation and Technology Committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed to guide the upcoming global AI safety summit at Bletchley Park.
The UK's plan to lead in AI regulation risks being overtaken by the EU unless new legislation is introduced in November, warns the Commons Technology Committee.
The rapid advancement of AI technology poses significant challenges for democratic societies, including the need for nuanced debates, public engagement, and ethical considerations in regulating AI to mitigate unintended consequences.
A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI), while the majority of respondents believe that AI will be capable of performing less than 20% of tasks currently done by humans.
Two senators, Richard Blumenthal and Josh Hawley, have released a bipartisan framework for AI legislation that includes requiring AI companies to apply for licensing and clarifying that a tech liability shield would not protect these companies from lawsuits.
Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
AI has the potential to fundamentally change governments and society, with AI-powered companies and individuals usurping traditional institutions and creating a new world order, warns economist Samuel Hammond. Traditional governments may struggle to regulate AI and keep pace with its advancements, potentially leading to a loss of global power for these governments.
California Senator Scott Wiener is introducing a bill to regulate artificial intelligence (AI) in the state, aiming to establish transparency requirements, legal liability, and security measures for advanced AI systems. The bill also proposes setting up a state research cloud called "CalCompute" to support AI development outside of big industry.
Tesla CEO Elon Musk suggests the need for government regulation of artificial intelligence, even proposing the creation of a Department of AI, during a gathering of tech CEOs in Washington. Senate Majority Leader Chuck Schumer and other attendees also expressed the view that government should play a role in regulating AI. The options for regulation range from a standalone department to leveraging existing agencies, but the debate is expected to continue in the coming months.
The AI industry should learn from the regulatory challenges faced by the crypto industry and take a proactive approach in building relationships with lawmakers, highlighting the benefits of AI technology, and winning public support through campaigns in key congressional districts and states.
Eight new technology companies, including Adobe, IBM, Nvidia, Palantir, and Salesforce, have made voluntary commitments on artificial intelligence (AI) to drive safe and secure development while working towards comprehensive regulation, according to a senior Biden administration official. The commitments include outside testing of AI systems, cybersecurity measures, information sharing, research on societal risks, and addressing society's challenges. The White House is partnering with the private sector to harness the benefits of AI while managing the risks.
Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and insufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
Wikipedia founder Jimmy Wales believes that regulating artificial intelligence (AI) is not feasible and compares the idea to "magical thinking," stating that many politicians lack a strong understanding of AI and its potential. While the UN is establishing a panel to investigate global regulation of AI, some experts, including physicist Reinhard Scholl, emphasize the need for regulation to prevent the misuse of AI by bad actors, while others, like Robert Opp, suggest forming a regulatory body similar to the International Civil Aviation Organisation. However, Wales argues that regulating individual developers using freely available AI software is impractical.
The United Nations is considering the establishment of a new agency to govern artificial intelligence (AI) and promote international cooperation, as concerns grow about the risks and challenges associated with AI development, but some experts express doubts about the support and effectiveness of such a global initiative.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.
Deputy Prime Minister Oliver Dowden will warn the UN that artificial intelligence (AI) poses a threat to world order unless governments take action, with fears that the rapid pace of AI development could lead to job losses, misinformation, and discrimination without proper regulations in place. Dowden will call for global regulation and emphasize the importance of making rules in parallel with AI development rather than retroactively. Despite the need for regulation, experts note the complexity of reaching a quick international agreement, with meaningful input needed from smaller countries, marginalized communities, and ethnic minorities. The UK aims to take the lead in AI regulation, but there are concerns that without swift action, the European Union's AI Act could become the global standard instead.
The United Nations aims to bring inclusiveness, legitimacy, and authority to the regulation of artificial intelligence, leveraging its experience with managing the impact of various technologies and creating compliance pressure for commitments made by governments, according to Amandeep Gill, the organization's top tech-policy official. Despite the challenges of building consensus and engaging stakeholders, the UN seeks to promote diverse and inclusive innovation to ensure equal opportunities and prevent concentration of economic power. Gill also emphasizes the potential of AI in accelerating progress towards the Sustainable Development Goals but expresses concerns about potential misuse and concentration of power.
Sen. Mark Warner, a U.S. Senator from Virginia, is urging Congress to take a less ambitious approach to regulating artificial intelligence (AI), suggesting that lawmakers should focus on narrowly focused issues rather than trying to address the full spectrum of AI risks with a single comprehensive law. Warner believes that tackling immediate concerns, such as AI-generated deepfakes, is a more realistic and effective approach to regulation. He also emphasizes the need for bipartisan agreement and action to demonstrate progress in the regulation of AI, especially given Congress's previous failures in addressing issues related to social media.
Minnesota Democrats are calling for regulations on artificial intelligence (AI) in elections, expressing concerns about the potential for AI to deceive and manipulate voters, while also acknowledging its potential benefits for efficiency and productivity in election administration.
Sens. Richard Blumenthal and Josh Hawley's bipartisan AI framework, intended to protect children and promote transparency, may stifle AI innovation by regulating development rather than use, potentially infringing upon First Amendment rights and hindering the advancement of beneficial AI technologies.
Regulators in Europe are targeting chipmakers like Nvidia over concerns about anticompetitive practices and the potential for them to dominate the AI supply chain, as the importance of computing power in AI adoption becomes apparent.
Lawmakers must adopt a nuanced understanding of AI and consider its real-world implications and consequences instead of relying on extreme speculation and the influence of corporate voices.
AI leaders including Alphabet CEO Sundar Pichai, Microsoft President Brad Smith, and OpenAI's Sam Altman support AI regulation to secure their investments, obtain unified rules, and gain a role in shaping legislation; regulation also benefits consumers by ensuring safety, cracking down on scams and discrimination, and reducing bias.
Artificial intelligence (AI) has the potential to disrupt industries and requires the attention of boards of directors to consider the strategic implications, risks, compliance, and governance issues associated with its use.
Ukraine's Ministry of Digital Transformation has unveiled a regulatory roadmap for artificial intelligence (AI), aiming to help local companies prepare for adopting a law similar to the EU's AI Act and educate citizens on protecting themselves from AI risks. The roadmap follows a bottom-up approach, providing tools for businesses to prepare for future requirements before implementing any laws.
The head of Germany's cartel office warns that artificial intelligence may increase the market power of Big Tech, highlighting the need for regulators to monitor anti-competitive behavior.
A new poll shows that 77% of Americans support the federal government developing its own AI resources and staff instead of outsourcing to private consultants and big tech companies. The outsourcing approach raises concerns about conflicts of interest, high costs, and the consolidation of power among big tech giants. Policymakers have the opportunity to build public capacity by addressing the lack of AI experts in government and improving coordination between government IT teams.
Democratic lawmakers have urged President Biden to turn non-binding safeguards on artificial intelligence (AI) into policy through an executive order, using the AI Bill of Rights as a guide to set in place comprehensive AI policy across the federal government.
Governments have made little progress in regulating artificial intelligence despite growing concerns about its safety, while Big Tech companies have regained control over the sector and are shaping norms through their own proposed regulatory models, according to the 2023 State of AI report.
Governments around the world are considering AI regulations to address concerns such as misinformation, job loss, and the misuse of AI technologies, with different approaches taken by countries like the US, UK, EU, China, Japan, Brazil, and Israel.
AI technology has advanced rapidly, bringing benefits such as improved accuracy alongside potential risks to the economy, national security, and various industries; this calls for government regulation and ethical safeguards to prevent misuse and protect human values.