OpenAI Launches New Team to Assess and Mitigate Risks of Advanced AI
- OpenAI has created a new team, called Preparedness, to assess and protect against risks from future AI systems. The team will examine threats such as AI's ability to deceive humans and to generate malicious code.
- Preparedness will also study more speculative threats, such as chemical, biological, radiological, and nuclear (CBRN) risks from AI models.
- OpenAI is soliciting risk-study ideas from the community and will award prizes for the top submissions.
- The Preparedness team will develop policies and tools for monitoring, mitigating, and governing risks during AI model development.
- This effort complements OpenAI's other AI safety work by focusing on the risks posed by highly capable AI systems that could exceed human intelligence.