A new poll conducted by the AI Policy Institute reveals that 72 percent of American voters want to slow down the development of AI, signaling a divergence between elite opinion and public opinion on the technology. Additionally, the poll shows that 82 percent of American voters do not trust AI companies to self-regulate. To address these concerns, the AI Now Institute has proposed a framework called "Zero Trust AI Governance," which calls for lawmakers to vigorously enforce existing laws, establish bold and easily administrable rules, and place the burden of proof on companies to demonstrate the safety of their AI systems.
AI-generated inventions should be eligible for patent protection to encourage innovation and maximize social benefits, as current laws hinder progress in biomedicine; jurisdictions around the world take differing approaches to patenting AI-generated inventions, and the US lags behind in this area, highlighting the need for legislative action.
The U.S. is falling behind in regulating artificial intelligence (AI), while Europe has passed the world's first comprehensive AI law; President Joe Biden recently met with industry leaders to discuss the need for AI regulation, and companies pledged to develop safeguards for AI-generated content and to prioritize user privacy.
Salesforce has released an AI Acceptable Use Policy that outlines the restrictions on the use of its generative AI products, including prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.
Artificial intelligence should be controlled by humans to prevent its weaponization and ensure safety measures are in place, according to Microsoft president Brad Smith. He stressed the need for regulations and laws to govern AI, comparing it to other technologies that have required safety brakes and human oversight. Smith also emphasized that AI is a tool to assist humans, not to replace them, and that it can help people think more efficiently.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulation. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the Commons Science, Innovation and Technology Committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed in order to guide the upcoming global AI safety summit at Bletchley Park.
The author suggests that developing safety standards for artificial intelligence (AI) is crucial, drawing upon his experience in ensuring safety measures for nuclear weapon systems and highlighting the need for a manageable group to define these standards.
Artificial intelligence regulation varies across countries, with Brazil focusing on user rights and risk assessments, China emphasizing "true and accurate" content generation, the EU categorizing AI into three risk levels, Israel promoting responsible innovation and self-regulation, Italy allocating funds for worker support, Japan adopting a wait-and-see approach, and the UAE prioritizing AI development and integration.
In a survey of 213 computer science professors, a majority supported creating a new federal agency in the United States to govern artificial intelligence (AI), even though most respondents also believe AI will be able to perform less than 20% of the tasks currently done by humans.
The lack of regulation surrounding artificial intelligence in healthcare is a significant threat, according to the World Health Organization's European regional director, who highlights the need for positive regulation to prevent harm while harnessing AI's potential.
Implementing global standards and regulations is crucial to counter the growing threat of cyberattacks and to address the role of artificial intelligence in modern warfare; governments and private companies must collaborate and adopt cybersecurity measures to protect individuals, businesses, and nations.
Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
Eight more companies, including Adobe, IBM, Palantir, Nvidia, and Salesforce, have pledged to voluntarily follow safety, security, and trust standards for artificial intelligence (AI) technology, joining the initiative led by Amazon, Google, Microsoft, and others, as concerns about the impact of AI continue to grow.
Tesla CEO Elon Musk called for government regulation of artificial intelligence, even proposing the creation of a Department of AI, during a gathering of tech CEOs in Washington. Senate Majority Leader Chuck Schumer and other attendees also said government should play a role in regulating AI. Options range from a standalone department to leveraging existing agencies, and the debate is expected to continue in the coming months.
The nation's top tech executives, including Elon Musk, Mark Zuckerberg, and Sundar Pichai, showed support for government regulations on artificial intelligence during a closed-door meeting in the U.S. Senate, although there is little consensus on what those regulations should entail and the political path for legislation remains challenging.
The Department of Homeland Security (DHS) has released new guidelines for the use of artificial intelligence (AI), including a policy that prohibits the improper collection and dissemination of data used in AI activities and a requirement that facial recognition technologies be thoroughly tested to ensure there is no unintended bias.
Spain has established Europe's first artificial intelligence (AI) policy task force, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), to determine laws and provide a framework for the development and implementation of AI technology in the country. Many governments are uncertain about how to regulate AI, balancing its potential benefits with fears of abuse and misuse.
A bipartisan group of senators is expected to introduce legislation to create a government agency to regulate AI and require AI models to obtain a license before deployment, a move that some leading technology companies have supported; however, critics argue that licensing regimes and a new AI regulator could hinder innovation and concentrate power among existing players, similar to the undesirable economic consequences seen in Europe.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
China's new artificial intelligence (AI) rules, among the strictest in the world, have been watered down and are not being strictly enforced, which could affect the country's technological competition with the U.S. and shape AI policy globally; if enforced to the letter, the regulations would be difficult for Chinese AI developers to comply with, while relaxed enforcement and regulatory leniency may allow Chinese tech firms to remain competitive.
President Joe Biden addressed the United Nations General Assembly, expressing the need to harness the power of artificial intelligence for good while safeguarding citizens from its potential risks, as U.S. policymakers explore the proper regulations and guardrails for AI technology.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and fail to hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
Deputy Prime Minister Oliver Dowden will warn the UN that artificial intelligence (AI) poses a threat to world order unless governments take action, with fears that the rapid pace of AI development could lead to job losses, misinformation, and discrimination without proper regulations in place. Dowden will call for global regulation and emphasize the importance of making rules in parallel with AI development rather than retroactively. Despite the need for regulation, experts note the complexity of reaching a quick international agreement, with meaningful input needed from smaller countries, marginalized communities, and ethnic minorities. The UK aims to take the lead in AI regulation, but there are concerns that without swift action, the European Union's AI Act could become the global standard instead.
Americans want upfront regulation of AI but don't trust the government to build those guardrails: 62% of voters prefer that the tech industry spearhead AI regulation, according to a recent poll, wanting AI companies to keep themselves in check rather than be held back by out-of-touch lawmakers.
Coinbase CEO Brian Armstrong believes that the United States should not regulate the development of artificial intelligence (AI) in order to avoid hindering progress and innovation in the same way that regulations have affected the crypto industry.
The EU's Artificial Intelligence Act must establish a clear link between artificial intelligence and the rule of law to safeguard human rights and regulate the use of AI without undermining protections, according to advocates.
AI leaders including Alphabet CEO Sundar Pichai, Microsoft president Brad Smith, and OpenAI's Sam Altman are supporting AI regulation to ensure investment security, unified rules, and a role in shaping legislation, as regulations also benefit consumers by ensuring safety, cracking down on scams and discrimination, and eliminating bias.
China's use of artificial intelligence (AI) for surveillance and oppression should deter the United States from collaborating with China on AI development and instead focus on asserting itself in international standards-setting bodies, open sourcing AI technologies, and promoting explainable AI to ensure transparency and uphold democratic values.
Ukraine's Ministry of Digital Transformation has unveiled a regulatory roadmap for artificial intelligence (AI), aiming to help local companies prepare for adopting a law similar to the EU's AI Act and educate citizens on protecting themselves from AI risks. The roadmap follows a bottom-up approach, providing tools for businesses to prepare for future requirements before implementing any laws.
Regulators should actively learn about algorithmic systems and evaluate them for compliance under existing statutory authority, using tools such as transparency requirements, algorithmic investigations and audits, regulatory AI sandboxes, the emerging AI assurance industry, and information from complaints and whistleblowers.
Companies are increasingly creating the role of chief AI officer to advocate for safe and effective AI practices, with responsibilities including understanding and applying AI technologies, ensuring safety and ethical considerations, and delivering quantifiable results.
Leaders of the G7 are expected to establish international AI regulations by the end of the year as part of the Hiroshima AI Process, in order to ensure safe and trustworthy generative AI systems and drive further economic growth and improved living conditions, Japanese Prime Minister Fumio Kishida said at the UN-sponsored Internet Governance Forum.
Democratic lawmakers have urged President Biden to turn non-binding safeguards on artificial intelligence (AI) into policy through an executive order, using the AI Bill of Rights as a guide to put comprehensive AI policy in place across the federal government.
Governments around the world are considering AI regulations to address concerns such as misinformation, job loss, and the misuse of AI technologies, with different approaches taken by countries like the US, UK, EU, China, Japan, Brazil, and Israel.
AI technology has advanced rapidly, bringing benefits such as improved accuracy alongside potential risks to the economy, national security, and various industries; government regulation and ethical safeguards are needed to prevent misuse and protect human values.
Policymakers can use the IEEE 1012 Standard for System, Software, and Hardware Verification and Validation as a roadmap to regulate AI systems and manage the risks associated with their use, providing guidance on assigning actions based on the severity and likelihood of consequences.
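The standard's central device is a lookup: estimate the severity of a failure's consequences and the likelihood of that failure, then read off an integrity level that determines how much verification and validation (V&V) rigor is required. Below is a minimal Python sketch of that lookup; the severity and likelihood names and the matrix values are illustrative placeholders, not the actual table from IEEE 1012.

```python
# Illustrative sketch of an IEEE 1012-style integrity-level lookup.
# The matrix values below are simplified placeholders, not the
# standard's actual assignments.

SEVERITY = ["negligible", "marginal", "critical", "catastrophic"]
LIKELIHOOD = ["infrequent", "occasional", "probable", "reasonable"]

# INTEGRITY_MATRIX[severity_index][likelihood_index] -> integrity level 1-4
# (a higher level means more stringent V&V is required).
INTEGRITY_MATRIX = [
    [1, 1, 1, 2],  # negligible consequences
    [1, 2, 2, 3],  # marginal consequences
    [2, 3, 3, 4],  # critical consequences
    [3, 4, 4, 4],  # catastrophic consequences
]

def integrity_level(severity: str, likelihood: str) -> int:
    """Look up the integrity level for a severity/likelihood pair."""
    return INTEGRITY_MATRIX[SEVERITY.index(severity)][LIKELIHOOD.index(likelihood)]

if __name__ == "__main__":
    # e.g. an AI system whose failure could probably cause critical harm
    print(integrity_level("critical", "probable"))  # -> 3 under this toy matrix
```

Under the standard, higher integrity levels call for progressively more extensive V&V tasks and greater independence of the V&V effort, which is what makes the matrix usable as a regulatory roadmap.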
The World Health Organization (WHO) has released guidelines for regulating artificial intelligence (AI) in healthcare, emphasizing the importance of safety, effectiveness, and stakeholder dialogue, while addressing issues such as bias, privacy, and data protection.
DeepMind released a paper proposing a framework for evaluating the societal and ethical risks of AI systems ahead of the AI Safety Summit, addressing the need for transparency and examination of AI systems at the "point of human interaction" and the ways in which these systems might be used and embedded in society.