A new poll conducted by the AI Policy Institute reveals that 72 percent of American voters want to slow down the development of AI, signaling a divergence between elite opinion and public opinion on the technology. Additionally, the poll shows that 82 percent of American voters do not trust AI companies to self-regulate. To address these concerns, the AI Now Institute has proposed a framework called "Zero Trust AI Governance," which calls for lawmakers to vigorously enforce existing laws, establish bold and easily administrable rules, and place the burden of proof on companies to demonstrate the safety of their AI systems.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) to keep pace with the European Union and the United States, as the EU advances its AI Act and US policymakers publish frameworks for AI regulation. According to a report by the Science, Innovation and Technology Committee, the government's current regulatory approach risks falling behind the fast pace of AI development. The report identifies 12 governance challenges, including bias in AI systems and the production of deepfake material, that must be addressed to guide the upcoming global AI safety summit at Bletchley Park.
Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
The US Securities and Exchange Commission (SEC) is utilizing artificial intelligence (AI) technologies to monitor the financial sector for fraud and manipulation, according to SEC Chair Gary Gensler.
The European Union is informally examining alleged anticompetitive practices in the artificial intelligence chip market, where Nvidia dominates with an 80% market share.
The European Commission has initiated preliminary inquiries into potential unfair practices related to GPUs used for AI, specifically looking into Nvidia's dominant position in the market and its pricing strategies, which may lead to a formal antitrust investigation and significant penalties for the company.
The European Commission has not, however, opened a formal investigation into the AI chip market, despite the recent raid on Nvidia by the French competition authority.
European regulators are targeting chipmakers like Nvidia over concerns about anticompetitive practices and the potential for such companies to dominate the AI supply chain, as the importance of computing power to AI adoption becomes apparent.
The European Commission is conducting risk assessments and considering export controls on critical technology areas, including AI and semiconductor technologies, in order to protect strategic interests and security.
Artificial intelligence (AI) has the potential to disrupt industries, and boards of directors should consider the strategic implications, risks, compliance, and governance issues associated with its use.
The head of Germany's cartel office warns that artificial intelligence may increase the market power of Big Tech, highlighting the need for regulators to monitor anti-competitive behavior.
The EU is close to enacting the world's first comprehensive laws on artificial intelligence, which would allow regulators to shut down harmful AI services. Negotiations on the AI Act are in their final stages, with a potential agreement expected by Wednesday. The legislation aims to establish safeguards and regulations for AI technology while addressing concerns such as real-time facial recognition and as-yet-unknown threats. Companies will be held accountable for the actions of their AI tools and could face fines or bans from the EU.
European technology startups could have more success if they relocated to the United States, where investment in and support for computing technology are greater, according to Nigel Toon, CEO of chip designer Graphcore. Toon warned that without sufficient investment, the UK and Europe could be left behind in the technology race and enter a "century of humiliation." The European Union's impending AI regulations also pose challenges for European startups: compliance with the rules could be burdensome and could push them to leave the continent.
European Union lawmakers have made progress in agreeing on rules for artificial intelligence, particularly on the designation of "high-risk" AI systems, bringing them closer to finalizing the landmark AI Act.
Lawmakers in Indiana are discussing the regulation of artificial intelligence (AI), with experts advocating for a balanced approach that fosters business growth while protecting privacy and data.