OpenAI Unveils New Framework to Monitor and Mitigate AI Risks
- OpenAI has created a "Preparedness Framework" to track, evaluate, and mitigate risks posed by advanced AI models, such as catastrophic system disruptions or assistance in creating weapons.
- A dedicated team, led by an MIT professor, will monitor AI risks and create scorecards categorizing them as low, medium, high, or critical.
- Under the framework, only low- or medium-risk models may be deployed; high-risk models may be developed further only under additional scrutiny.
- OpenAI's board has final veto power over releasing risky AI models, highlighting the company's unusual governance structure.
- The framework arrives amid debate over AI existential risk; OpenAI's CEO signed an open letter likening AI risk to nuclear war and pandemics.