
Pennsylvania establishes new standards to govern ethical AI use by state agencies

  • Establishes standards and governance for AI use by PA state agencies
  • Outlines values and principles for state employee AI use
  • Partners with Carnegie Mellon for AI research and expert advisory support
  • Creates Generative AI Governing Board to oversee ethical AI deployment
  • Details previous executive orders on business, jobs, aging services, licensing
wgal.com
Relevant topic timeline:
### Summary
The article discusses the rapid advancement and potential risks of artificial intelligence (AI) and proposes nationalizing certain aspects of AI under a governing body called the Humane AI Commission to ensure AI is aligned with human interests.

### Facts
- AI is evolving rapidly and penetrating various aspects of American life, from image recognition to healthcare.
- AI has the potential to bring both significant benefits and risks to society.
- Transparency in AI is limited, and understanding how specific AI works is difficult.
- Congress is becoming more aware of the importance of AI and its need for regulation.
- The author proposes the creation of a governing body, the Humane AI Commission, that can control and steer AI technology to serve humanity's best interests.
- The nationalization of advanced AI models could be considered, similar to the Atomic Energy Commission's control over nuclear reactors.
- Various options, such as an AI pause or leaving AI development to the free market or current government agencies, have limitations in addressing the potential risks of AI.
- The author suggests that the United States should take a bold executive leadership approach to develop a national AI plan and ensure global AI leadership with a focus on benevolence and human-controlled AI.

### 🤖 AI Nationalization
- The case to nationalize the "nuclear reactors" of AI — the world's most advanced AI models — hinges on this question: Who do we want to control AI's nuclear codes? Big Tech CEOs answering to a few billionaire shareholders, or the government of the United States, answering to its citizens?

### 👥 Humane AI Commission
- The author proposes the creation of a Humane AI Commission, run by AI experts, to steer and control AI technology in alignment with human interests.

### ⚠️ Risks of AI
- AI's rapid advancement and lack of transparency pose risks such as unpredictable behavior; potential damage to power generation, financial markets, and public health; and the possibility of AI moving beyond human control.

### ⚖️ AI Regulation
- The article calls for federal regulation of AI, but emphasizes the limitations of traditional regulation in addressing the fast-evolving nature of AI and the need for a larger-scale approach like nationalization.
### Summary
The California Legislature has unanimously approved an artificial intelligence-drafted resolution to examine and implement regulations on AI use.

### Facts
- 💻 Senate Concurrent Resolution 17 (SCR 17) was introduced by state Sen. Bill Dodd and is the first AI-drafted resolution in the U.S.
- 💡 The resolution aims to ensure responsible AI deployment and use, protecting public rights while leveraging AI benefits.
- ❌ Challenges posed by AI-driven technology include unauthorized data collection and sharing.
- ✅ Potential benefits of AI highlighted in the resolution include increased efficiency in agriculture and revolutionary data analysis for industries.
Congress should prioritize maintaining bipartisan commitment to AI, generating global AI guardrails, and seeking out local perspectives in order to develop effective and responsible AI policies.
The state of Kansas has implemented a new policy regarding the use of artificial intelligence, emphasizing the need for control, security, and editing of AI-generated content while recognizing its potential to enhance productivity and efficiency.
Artificial intelligence should be controlled by humans to prevent its weaponization and to ensure safety measures are in place, according to Microsoft president Brad Smith. He stressed the need for regulations and laws to govern AI, comparing it to other technologies that have required safety brakes and human oversight. Smith also emphasized that AI is a tool to assist humans, not to replace them, and that it can help individuals think more efficiently.
A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI); the majority of respondents also believe that AI will be capable of performing fewer than 20% of the tasks currently done by humans.
Mustafa Suleyman, CEO of Inflection AI, argues that restricting the sale of AI technologies and appointing a cabinet-level regulator are necessary steps to combat the negative effects of artificial intelligence and prevent misuse.
California Governor Gavin Newsom has issued an executive order instructing state agencies to develop guidelines for the increased use of artificial intelligence (AI), including risk assessment reports and ethical regulations, positioning the state as a leader in AI governance.
Congressman Clay Higgins (R-LA) plans to introduce legislation prohibiting the use of artificial intelligence (AI) by the federal government for law enforcement purposes, in response to the Internal Revenue Service's recently announced AI-driven tax enforcement initiative.
Government agencies at the state and city levels in the United States are exploring the use of generative artificial intelligence (AI) to streamline bureaucratic processes, but they also face unique challenges related to transparency and accountability, such as ensuring accuracy, protecting sensitive information, and avoiding the spread of misinformation. Policies and guidelines are being developed to regulate the use of generative AI in government work, with a focus on disclosure, fact checking, and human review of AI-generated content.
China's targeted and iterative approach to regulating artificial intelligence (AI) could provide valuable lessons for the United States, despite ideological differences, as the U.S. Congress grapples with comprehensive AI legislation covering various issues like national security, job impact, and democratic values. Learning from China's regulatory structure and process can help U.S. policymakers respond more effectively to the challenges posed by AI.
California Senator Scott Wiener is introducing a bill to regulate artificial intelligence (AI) in the state, aiming to establish transparency requirements, legal liability, and security measures for advanced AI systems. The bill also proposes setting up a state research cloud called "CalCompute" to support AI development outside of big industry.
Tesla CEO Elon Musk suggests the need for government regulation of artificial intelligence, even proposing the creation of a Department of AI, during a gathering of tech CEOs in Washington. Senate Majority Leader Chuck Schumer and other attendees also expressed the view that government should play a role in regulating AI. The options for regulation range from a standalone department to leveraging existing agencies, but the debate is expected to continue in the coming months.
The Department of Homeland Security (DHS) has released new guidelines for the use of artificial intelligence (AI), including a policy that prohibits the collection and dissemination of data used in AI activities and a requirement for thorough testing of facial recognition technologies to ensure there is no unintended bias.
Recent Capitol Hill activity, including proposed legislation and AI hearings, provides corporate leaders with greater clarity on the federal regulation of artificial intelligence, offering insight into potential licensing requirements, oversight, accountability, transparency, and consumer protections.
Spain has established Europe's first artificial intelligence (AI) policy task force, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), to determine laws and provide a framework for the development and implementation of AI technology in the country. Many governments are uncertain about how to regulate AI, balancing its potential benefits with fears of abuse and misuse.
California Governor Gavin Newsom has signed an executive order to study the uses and risks of artificial intelligence (AI), with C3.ai CEO Thomas Siebel praising the proposal as "cogent, thoughtful, concise, productive and really extraordinarily positive public policy." Siebel believes that the order aims to understand and mitigate the risks associated with AI applications rather than impose regulation on AI companies.
A bipartisan group of senators is expected to introduce legislation to create a government agency to regulate AI and require AI models to obtain a license before deployment, a move that some leading technology companies have supported; however, critics argue that licensing regimes and a new AI regulator could hinder innovation and concentrate power among existing players, similar to the undesirable economic consequences seen in Europe.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
Artificial intelligence (AI) requires leadership from business executives and a dedicated and diverse AI team to ensure effective implementation and governance, with roles focusing on ethics, legal, security, and training data quality becoming increasingly important.
President Biden has called for the governance of artificial intelligence to ensure it is used as a tool of opportunity and not as a weapon of oppression, emphasizing the need for international collaboration and regulation in this area.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
The United States must prioritize global leadership in artificial intelligence (AI) and win the platform competition with China in order to protect national security, democracy, and economic prosperity, according to Ylli Bajraktari, the president and CEO of the Special Competitive Studies Project and former Pentagon official.
Pennsylvania state government is preparing to use artificial intelligence in its operations and is taking steps to understand and regulate its impact, including the formation of an AI governing board and the development of training programs for state employees.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.
The White House plans to introduce an executive order on artificial intelligence in the coming weeks, as President Biden aims for responsible AI innovation and collaboration with international partners.
The EU's Artificial Intelligence Act must establish a clear link between artificial intelligence and the rule of law to safeguard human rights and regulate the use of AI without undermining protections, according to advocates.
Sen. Mark Warner, a U.S. Senator from Virginia, is urging Congress to take a less ambitious approach to regulating artificial intelligence (AI), suggesting that lawmakers should focus on narrowly focused issues rather than trying to address the full spectrum of AI risks with a single comprehensive law. Warner believes that tackling immediate concerns, such as AI-generated deepfakes, is a more realistic and effective approach to regulation. He also emphasizes the need for bipartisan agreement and action to demonstrate progress in the regulation of AI, especially given Congress's previous failures in addressing issues related to social media.
Minnesota Democrats are calling for regulations on artificial intelligence (AI) in elections, expressing concerns about the potential for AI to deceive and manipulate voters, while also acknowledging its potential benefits for efficiency and productivity in election administration.
AI leaders including Alphabet CEO Sundar Pichai, Microsoft president Brad Smith, and OpenAI's Sam Altman support AI regulation as a way to secure their investments, establish unified rules, and gain a role in shaping legislation; regulation also benefits consumers by ensuring safety, cracking down on scams and discrimination, and reducing bias.
China's use of artificial intelligence (AI) for surveillance and oppression should deter the United States from collaborating with China on AI development and instead focus on asserting itself in international standards-setting bodies, open sourcing AI technologies, and promoting explainable AI to ensure transparency and uphold democratic values.
Ukraine's Ministry of Digital Transformation has unveiled a regulatory roadmap for artificial intelligence (AI), aiming to help local companies prepare for adopting a law similar to the EU's AI Act and educate citizens on protecting themselves from AI risks. The roadmap follows a bottom-up approach, providing tools for businesses to prepare for future requirements before implementing any laws.
Companies are increasingly creating the role of chief AI officer to advocate for safe and effective AI practices, with responsibilities including understanding and applying AI technologies, ensuring safety and ethical considerations, and delivering quantifiable results.
Democratic lawmakers have urged President Biden to turn non-binding safeguards on artificial intelligence (AI) into policy through an executive order, using the AI Bill of Rights as a guide to set in place comprehensive AI policy across the federal government.
President Biden's executive order on artificial intelligence is expected to use the federal government's purchasing power to influence American AI standards, tighten industry guidelines, require cloud computing companies to monitor users developing powerful AI systems, and boost AI talent recruitment and domestic training.
Governor Phil Murphy of New Jersey has established an Artificial Intelligence Task Force to analyze the potential impacts of AI on society and recommend government actions to encourage ethical use of AI technologies, as well as announced a leading initiative to provide AI training for state employees.
California's governor, Gavin Newsom, has signed an executive order aimed at harnessing the benefits of artificial intelligence while managing the risks, but the state lacks the technical expertise needed to implement the order's requirements, highlighting the need for collaboration between academia and government.
The administration of New York City has released a plan to adopt and regulate AI within the local government, along with the launch of the city's first AI chatbot, aimed at improving government accessibility and providing information for businesses.