AI executives may be exaggerating the dangers of artificial intelligence to advance their own interests, according to an analysis of industry responses to proposed AI regulations.
A new poll conducted by the AI Policy Institute reveals that 72 percent of American voters want to slow down the development of AI, signaling a divergence between elite opinion and public opinion on the technology. Additionally, the poll shows that 82 percent of American voters do not trust AI companies to self-regulate. To address these concerns, the AI Now Institute has proposed a framework called "Zero Trust AI Governance," which calls for lawmakers to vigorously enforce existing laws, establish bold and easily administrable rules, and place the burden of proof on companies to demonstrate the safety of their AI systems.
The rapid development of AI technology, exemplified by OpenAI's ChatGPT, has raised concerns about the potential societal impacts and ethical implications, highlighting the need for responsible AI development and regulation to mitigate these risks.
The U.S. is falling behind in regulating artificial intelligence (AI), while Europe has passed the world's first comprehensive AI law. President Joe Biden recently met with industry leaders to discuss the need for AI regulation, and companies pledged to develop safeguards for AI-generated content and prioritize user privacy.
Princeton University professor Arvind Narayanan and his Ph.D. student Sayash Kapoor, authors of "AI Snake Oil," discuss the evolution of AI and the need for responsible practices in the gen AI era, emphasizing the power of collective action and usage transparency.
Microsoft's report on governing AI in India provides five policy suggestions while emphasizing the importance of ethical AI, human control over AI systems, and the need for multilateral frameworks to ensure responsible AI development and deployment worldwide.
The deployment of generative AI (gen AI) capabilities in enterprises comes with compliance risks and potential legal liabilities, particularly related to data privacy laws and copyright infringement, prompting companies to take a cautious approach and deploy gen AI in low-risk areas. Strategies such as prioritizing lower-risk use cases, implementing data governance measures, utilizing layers of control, considering open-source software, addressing data residency requirements, seeking indemnification from vendors, and giving board-level attention to AI are being employed to mitigate risks and navigate regulatory uncertainty.
Salesforce has released an AI Acceptable Use Policy that outlines the restrictions on the use of its generative AI products, including prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.
The GZERO World podcast episode discusses the explosive growth and potential risks of generative AI, as well as five proposed principles for effective AI governance.
Microsoft President Brad Smith advocates for national and international regulation of artificial intelligence (AI), emphasizing the importance of safeguards and laws that keep pace with the rapid advancement of AI technology. He believes that AI can bring significant benefits to India and the world, but also emphasizes the responsibility that comes with it. Smith praises India's data protection legislation and digital public infrastructure, stating that India has become one of the most important countries for Microsoft. He also highlights the necessity of global guardrails on AI and the need to prioritize safety and build safeguards.
The increasing investment in generative AI and its disruptive impact on various industries has brought the need for regulation to the forefront, with technologists and regulators recognizing the importance of ensuring safer technological applications, but differing on the scope of regulation needed. However, it is argued that existing frameworks and standards, similar to those applied to the internet, can be adapted to regulate AI and protect consumer interests without stifling innovation.
Artificial intelligence should be controlled by humans to prevent its weaponization and ensure safety measures are in place, according to Microsoft's president Brad Smith. He stressed the need for regulations and laws to govern AI, comparing it to other technologies that have required safety brakes and human oversight. Additionally, Smith emphasized that AI is a tool to assist humans, not to replace them, and that it can help individuals think more efficiently.
Artificial intelligence (AI) tools can put human rights at risk, as highlighted by researchers from Amnesty International on the Me, Myself, and AI podcast, who discuss scenarios in which AI is used to track activists and make automated decisions that can lead to discrimination and inequality, emphasizing the need for human intervention and changes in public policy to address these issues.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulations. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the science, innovation, and technology committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed in order to guide the upcoming global AI safety summit at Bletchley Park.
The authors propose a framework for assessing the potential harm caused by AI systems in order to address concerns about "Killer AI" and ensure responsible integration into society.
Companies are increasingly exploring the use of artificial intelligence (AI) in various areas such as sales/marketing, product development, and legal, but boards and board committees often lack explicit responsibility for AI oversight, according to a survey of members of the Society for Corporate Governance.
Artificial intelligence regulation varies across countries, with Brazil focusing on user rights and risk assessments, China emphasizing "true and accurate" content generation, the EU categorizing AI into three risk levels, Israel promoting responsible innovation and self-regulation, Italy allocating funds for worker support, Japan adopting a wait-and-see approach, and the UAE prioritizing AI development and integration.
The use of AI in the entertainment industry, such as body scans and generative AI systems, raises concerns about workers' rights and intellectual property, as well as the potential for broader use of AI in other industries that could infringe on human connection and privacy.
The rapid advancement of AI technology poses significant challenges for democratic societies, including the need for nuanced debates, public engagement, and ethical considerations in regulating AI to mitigate unintended consequences.
A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI), while the majority of respondents believe that AI will be capable of performing less than 20% of tasks currently done by humans.
Mustafa Suleyman, CEO of Inflection AI, argues that restricting the sale of AI technologies and appointing a cabinet-level regulator are necessary steps to combat the negative effects of artificial intelligence and prevent misuse.
Two senators, Richard Blumenthal and Josh Hawley, have released a bipartisan framework for AI legislation that includes requiring AI companies to apply for licensing and clarifying that a tech liability shield would not protect these companies from lawsuits.
Fears that AI could undermine democracy are assessed, including the threat posed by Chinese misinformation campaigns and Senator Josh Hawley's call for AI regulation.
The entrepreneur Mustafa Suleyman calls for urgent regulation and containment of artificial intelligence in his new book, emphasizing the need to tap into its opportunities while mitigating its risks.
Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
AI has the potential to fundamentally change governments and society, with AI-powered companies and individuals usurping traditional institutions and creating a new world order, warns economist Samuel Hammond. Traditional governments may struggle to regulate AI and keep pace with its advancements, potentially leading to a loss of global power for these governments.
China's targeted and iterative approach to regulating artificial intelligence (AI) could provide valuable lessons for the United States, despite ideological differences, as the U.S. Congress grapples with comprehensive AI legislation covering various issues like national security, job impact, and democratic values. Learning from China's regulatory structure and process can help U.S. policymakers respond more effectively to the challenges posed by AI.
California Senator Scott Wiener is introducing a bill to regulate artificial intelligence (AI) in the state, aiming to establish transparency requirements, legal liability, and security measures for advanced AI systems. The bill also proposes setting up a state research cloud called "CalCompute" to support AI development outside of big industry.
Tesla CEO Elon Musk suggests the need for government regulation of artificial intelligence, even proposing the creation of a Department of AI, during a gathering of tech CEOs in Washington. Senate Majority Leader Chuck Schumer and other attendees also expressed the view that government should play a role in regulating AI. The options for regulation range from a standalone department to leveraging existing agencies, but the debate is expected to continue in the coming months.
The UK government is showing increased concern about the potential risks of artificial intelligence (AI) and the influence of the "Effective Altruism" (EA) movement, which warns of the existential dangers of super-intelligent AI and advocates for long-term policy planning; critics argue that the focus on future risks distracts from the real ethical challenges of AI in the present and raises concerns of regulatory capture by vested interests.
Spain has established Europe's first artificial intelligence (AI) policy task force, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), to determine laws and provide a framework for the development and implementation of AI technology in the country. Many governments are uncertain about how to regulate AI, balancing its potential benefits with fears of abuse and misuse.
The AI industry should learn from the regulatory challenges faced by the crypto industry and take a proactive approach in building relationships with lawmakers, highlighting the benefits of AI technology, and winning public support through campaigns in key congressional districts and states.
Eight new technology companies, including Adobe, IBM, Nvidia, Palantir, and Salesforce, have made voluntary commitments on artificial intelligence (AI) to drive safe and secure development while working towards comprehensive regulation, according to a senior Biden administration official. The commitments include outside testing of AI systems, cybersecurity measures, information sharing, research on societal risks, and addressing society's challenges. The White House is partnering with the private sector to harness the benefits of AI while managing the risks.
The book "The Age of AI: And Our Human Future" by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher explores the transformational impact of AI on human society and the need for humans to shape its development and use with their values.
A bipartisan group of senators is expected to introduce legislation to create a government agency to regulate AI and require AI models to obtain a license before deployment, a move that some leading technology companies have supported; however, critics argue that licensing regimes and a new AI regulator could hinder innovation and concentrate power among existing players, similar to the undesirable economic consequences seen in Europe.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
The geography of AI, particularly the distribution of compute power and data centers, is becoming increasingly important in global economic and geopolitical competition, raising concerns about issues such as data privacy, national security, and the dominance of tech giants like Amazon. Policy interventions and accountability for AI models are being urged to address the potential harms and issues associated with rapid technological advancements. The UK's Competition and Markets Authority has also warned about the risks of industry consolidation and the potential harm to consumers if a few firms gain market power in the AI sector.
Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
China's new artificial intelligence (AI) rules, which are among the strictest in the world, have been watered down and are not being strictly enforced, potentially affecting the country's technological competition with the U.S. and influencing AI policy globally; if maximally enforced, the regulations could be difficult for Chinese AI developers to comply with, while relaxed enforcement and regulatory leniency may still allow Chinese tech firms to remain competitive.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and insufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
Pennsylvania Governor Josh Shapiro signed an executive order establishing standards and a governance framework for the use of artificial intelligence (AI) by state agencies, as well as creating a Generative AI Governing Board and outlining core values to govern AI use. The order aims to responsibly integrate AI into government operations and enhance employee job functions.
Wikipedia founder Jimmy Wales believes that regulating artificial intelligence (AI) is not feasible and compares the idea to "magical thinking," stating that many politicians lack a strong understanding of AI and its potential. While the UN is establishing a panel to investigate global regulation of AI, some experts, including physicist Reinhard Scholl, emphasize the need for regulation to prevent the misuse of AI by bad actors, while others, like Robert Opp, suggest forming a regulatory body similar to the International Civil Aviation Organisation. However, Wales argues that regulating individual developers using freely available AI software is impractical.
AI adoption is rapidly increasing, but it is crucial for businesses to establish responsible AI governance and ethical usage policies to prevent potential harm and job loss, while using AI to automate tasks, augment human work, enable change management, make data-driven decisions, and prioritize employee training.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.