
Pennsylvania Launches Initiative to Responsibly Adopt AI in State Government Under Governor Shapiro

  • Under Governor Josh Shapiro, Pennsylvania's state government is preparing to use AI in its operations.

  • Shapiro is convening an AI governing board, publishing AI principles, and developing employee training programs.

  • The state will start a two-year fellowship to recruit AI experts to help agencies adopt the technology.

  • Public safety agencies are already consulting AI experts regarding potential threats like fraud.

  • The governing board will guide AI development, purchasing, and use with help from Carnegie Mellon.

go.com
Relevant topic timeline:
### Summary
The rapid advancement of artificial intelligence (AI) presents both beneficial possibilities and concerning risks, as experts warn about potential negative impacts including the threat of extinction. Government and industry efforts are under way to manage these risks and regulate AI technology, while also addressing concerns about misinformation, bias, and the need for societal literacy in understanding AI.

### Facts
- The use of AI is rapidly growing in areas such as health care, the workplace, education, arts, and entertainment.
- The Center for AI Safety (CAIS) issued a warning, signed by hundreds of individuals including tech industry leaders and scientists, about the need to prioritize mitigating the risks of AI alongside global-scale dangers like pandemics and nuclear war.
- Sam Altman, CEO of OpenAI, acknowledged both the benefits and the concerns associated with AI technology, emphasizing the need for serious consideration of its risks.
- Some experts believe the warnings about potential AI risks describe long-term scenarios rather than immediate doomsday situations, and caution against the hype surrounding AI.
- The National Conference of State Legislatures (NCSL) is working on regulating AI at the state level, with several states already introducing AI bills and forming advisory groups.
- State legislators aim to define responsible AI use by governments and to protect constituents engaging with AI in the private sector.
- The federal government is establishing National Artificial Intelligence Research Institutes to invest in long-term AI research.
- Misinformation and disinformation are concerns, as some AI algorithms can generate biased and inaccurate information.
- OpenAI acknowledges that AI tools could contribute to disinformation campaigns and is collaborating with researchers and industry peers to address the issue.
- The NCSL report highlights the need for policymakers to understand the programming decisions behind AI systems and their potential impact on citizens.
- Society lacks the literacy to distinguish truth from false information, contributing to the proliferation and belief of generative misinformation.

### 🤖 AI
- The use of artificial intelligence is rapidly advancing across many fields.
- Concerns have been raised about the potential risks and negative impacts of AI.
- Government and industry efforts are underway to manage AI risks and regulate the technology.
- Misinformation, bias, and the lack of societal literacy in understanding AI are additional challenges.
Wisconsin has established a task force to study the impact of artificial intelligence on the state's workforce, following a trend among other states. The task force, comprised of government leaders, educational institutions, and representatives from various sectors, aims to gather information and create an action plan to understand and adapt to the transformations brought about by AI.
As calls to regulate artificial intelligence (AI) grow, Congress's historical pattern of slow responses to revolutionary technologies suggests that comprehensive federal regulation of advanced AI systems in the U.S. will likely take time.
The use of AI algorithms by insurance companies to assess claims is raising concerns about potential bias and lack of human oversight, leading Pennsylvania legislators to propose legislation that would regulate the use of AI in claims processing.
The state of Kansas has implemented a new policy regarding the use of artificial intelligence, emphasizing the need for control, security, and editing of AI-generated content while recognizing its potential to enhance productivity and efficiency.
A school district in Georgia has implemented an AI-driven curriculum that incorporates artificial intelligence into classrooms from kindergarten to high school, aiming to prepare students for the challenges and opportunities of the technology, with students already showing enthusiasm and proficiency in using AI tools.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulations. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the science, innovation, and technology committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed in order to guide the upcoming global AI safety summit at Bletchley Park.
New Hampshire schools are considering the role of AI in the classroom and are planning lessons on the proper and ethical use of generative artificial intelligence programs, which can provide information in seconds but must be used responsibly. The state is working on implementing policies to ensure the technology enhances productivity and instruction while protecting students.
Some companies in the Phoenix area are hiring due to the implementation of artificial intelligence (AI), challenging the notion that AI will replace human workers and negatively impact the job market.
A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI), while the majority of respondents believe that AI will be capable of performing less than 20% of tasks currently done by humans.
California Governor Gavin Newsom has issued an executive order instructing state agencies to develop guidelines for the increased use of artificial intelligence (AI), including risk assessment reports and ethical regulations, positioning the state as a leader in AI governance.
Tech industry lobbyists are turning their attention to state capitals in order to influence AI legislation and prevent the imposition of stricter rules across the nation, as states often act faster than Congress when it comes to tech issues; consumer advocates are concerned about the industry's dominance in shaping AI policy discussions.
Congressman Clay Higgins (R-LA) plans to introduce legislation prohibiting the use of artificial intelligence (AI) by the federal government for law enforcement purposes, in response to the Internal Revenue Service's recently announced AI-driven tax enforcement initiative.
Government agencies at the state and city levels in the United States are exploring the use of generative artificial intelligence (AI) to streamline bureaucratic processes, but they also face unique challenges related to transparency and accountability, such as ensuring accuracy, protecting sensitive information, and avoiding the spread of misinformation. Policies and guidelines are being developed to regulate the use of generative AI in government work, with a focus on disclosure, fact checking, and human review of AI-generated content.
Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
China's targeted and iterative approach to regulating artificial intelligence (AI) could provide valuable lessons for the United States, despite ideological differences, as the U.S. Congress grapples with comprehensive AI legislation covering various issues like national security, job impact, and democratic values. Learning from China's regulatory structure and process can help U.S. policymakers respond more effectively to the challenges posed by AI.
California Senator Scott Wiener is introducing a bill to regulate artificial intelligence (AI) in the state, aiming to establish transparency requirements, legal liability, and security measures for advanced AI systems. The bill also proposes setting up a state research cloud called "CalCompute" to support AI development outside of big industry.
The Department of Homeland Security (DHS) has announced new policies for the use of artificial intelligence (AI) to secure the border, prioritizing rigorous testing, safeguarding privacy, and avoiding biases, while also allowing Americans to decline the use of facial recognition technology in certain situations.
Recent Capitol Hill activity, including proposed legislation and AI hearings, provides corporate leaders with greater clarity on the federal regulation of artificial intelligence, offering insight into potential licensing requirements, oversight, accountability, transparency, and consumer protections.
A bipartisan group of senators is expected to introduce legislation to create a government agency to regulate AI and require AI models to obtain a license before deployment, a move that some leading technology companies have supported; however, critics argue that licensing regimes and a new AI regulator could hinder innovation and concentrate power among existing players, similar to the undesirable economic consequences seen in Europe.
Educators in the Sacramento City Unified District are monitoring students' use of artificial intelligence (AI) on assignments and have implemented penalties for academic misconduct, while also finding ways to incorporate AI into their own teaching practices.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
Pennsylvania Governor Josh Shapiro signed an executive order establishing standards and a governance framework for the use of artificial intelligence (AI) by state agencies, as well as creating a Generative AI Governing Board and outlining core values to govern AI use. The order aims to responsibly integrate AI into government operations and enhance employee job functions.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.
There is a need for more policy balance in discussions about artificial intelligence (AI) to focus on the potential for good and how to ensure societal benefit, as AI has the potential to advance education, national security, and economic success, while also providing new economic opportunities and augmenting human capabilities.
Minnesota Democrats are calling for regulations on artificial intelligence (AI) in elections, expressing concerns about the potential for AI to deceive and manipulate voters, while also acknowledging its potential benefits for efficiency and productivity in election administration.
Artificial intelligence (AI) has the potential to disrupt industries and requires the attention of boards of directors to consider the strategic implications, risks, compliance, and governance issues associated with its use.
China's use of artificial intelligence (AI) for surveillance and oppression should deter the United States from collaborating with China on AI development and instead focus on asserting itself in international standards-setting bodies, open sourcing AI technologies, and promoting explainable AI to ensure transparency and uphold democratic values.
Ukraine's Ministry of Digital Transformation has unveiled a regulatory roadmap for artificial intelligence (AI), aiming to help local companies prepare for adopting a law similar to the EU's AI Act and educate citizens on protecting themselves from AI risks. The roadmap follows a bottom-up approach, providing tools for businesses to prepare for future requirements before implementing any laws.
Companies are increasingly creating the role of chief AI officer to advocate for safe and effective AI practices, with responsibilities including understanding and applying AI technologies, ensuring safety and ethical considerations, and delivering quantifiable results.
A Georgia school district is incorporating AI into its curriculum from kindergarten onwards, while Pennsylvania high schools are prioritizing career education due to a shortage of skilled trade workers.
AI technology has advanced rapidly, bringing both positive and negative consequences such as improved accuracy and potential risks to the economy, national security, and various industries, requiring government regulation and ethical considerations to prevent misuse and protect human values.
Governor Phil Murphy of New Jersey has established an Artificial Intelligence Task Force to analyze the potential impacts of AI on society and recommend government actions to encourage ethical use of AI technologies, as well as announced a leading initiative to provide AI training for state employees.
New York City has launched its first-ever Artificial Intelligence Action Plan, aimed at evaluating AI tools and associated risks, building AI knowledge among city government employees, and responsibly implementing AI technology in various sectors.
The administration of New York City has released a plan to adopt and regulate AI within the local government, along with the launch of the city's first AI chatbot, aimed at improving government accessibility and providing information for businesses.
New York City has unveiled an AI action plan aimed at understanding and responsibly implementing the technology, with steps including the establishment of an AI Steering Committee and engagement with outside experts and the public.
Lawmakers in the US are starting a series of hearings on the role of artificial intelligence (AI), focusing on concerns around data collection and use by AI systems as the industry continues to expand and regulations are considered; experts and witnesses will provide testimony on the subject, including former FTC Chair Jon Leibowitz and actor Clark Gregg.