OpenAI Forms Safety Board to Assess AI Risks, Can Veto Risky Systems
- OpenAI is expanding its safety processes with a new advisory group that will assess AI risks and make recommendations.
- The board now has veto power over risky AI systems, though it is unclear whether it will use it.
- Models are rated on four risk categories: cybersecurity, persuasion/disinformation, autonomy, and CBRN (chemical, biological, radiological, and nuclear). Models rated "high" risk cannot be deployed.
- Recommendations go to leadership and the board simultaneously; the board can override leadership's decisions.
- It is unclear whether critical risks would be disclosed publicly, or how transparent the overall process will be.