Lawmakers on the Senate Energy Committee were briefed on the threats and opportunities associated with integrating artificial intelligence (AI) into the U.S. energy sector, with particular emphasis on the risk posed by China's AI advancements and the need for education and regulation to mitigate negative impacts.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, pledged in a meeting with White House officials to conduct more testing and research on the risks of artificial intelligence (AI), signaling a "bridge" to future government action on the issue. These voluntary commitments come amid congressional scrutiny and ongoing White House efforts to develop AI policy.
California Senator Scott Wiener is introducing a bill to regulate artificial intelligence (AI) in the state, aiming to establish transparency requirements, legal liability, and security measures for advanced AI systems. The bill also proposes setting up a state research cloud called "CalCompute" to support AI development outside of big industry.
California Governor Gavin Newsom plans to sign legislation that would make California the first U.S. state to require large companies to publicly disclose their climate-related financial risks and their scope 1, 2, and 3 emissions, including supply-chain emissions, leading the way in corporate transparency on climate risk.
President Joe Biden addressed the United Nations General Assembly, expressing the need to harness the power of artificial intelligence for good while safeguarding citizens from its potential risks, as U.S. policymakers explore the proper regulations and guardrails for AI technology.
Pennsylvania Governor Josh Shapiro signed an executive order establishing standards and a governance framework for the use of artificial intelligence (AI) by state agencies, as well as creating a Generative AI Governing Board and outlining core values to govern AI use. The order aims to responsibly integrate AI into government operations and enhance employee job functions.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.
Israeli Prime Minister Benjamin Netanyahu warned of the potential dangers of artificial intelligence (AI) and called for its responsible and ethical development in his speech at the United Nations General Assembly, emphasizing that nations must work together to prevent the perils of AI and ensure it brings greater freedom and benefits humanity.
The White House plans to introduce an executive order on artificial intelligence in the coming weeks, as President Biden aims for responsible AI innovation and collaboration with international partners.
Sen. Mark Warner of Virginia is urging Congress to take a less ambitious approach to regulating artificial intelligence (AI), suggesting that lawmakers focus on narrow issues rather than trying to address the full spectrum of AI risks with a single comprehensive law. Warner believes that tackling immediate concerns, such as AI-generated deepfakes, is a more realistic and effective path to regulation. He also emphasizes the need for bipartisan agreement and action to demonstrate progress on AI, especially given Congress's previous failures to address issues related to social media.
Californians are awaiting Governor Gavin Newsom's expected approval of two climate-focused bills that would compel large companies to disclose their greenhouse gas emissions and climate-related financial risks, potentially setting a precedent for other states to follow.