AI Transparency Falters as New Models Stay Secretive
- AI models such as GPT-4 are becoming more capable, but companies like OpenAI are withholding key details about how they are built, reducing accountability and safety.
- A Stanford study scored 10 major AI models on 13 transparency criteria; even "open source" models scored low, with the most transparent reaching only 54%.
- This secrecy contrasts with the openness of the previous AI boom, which fueled progress in speech and image recognition.
- Experts argue that greater openness is needed for AI to advance as a scientific field; AI2 is releasing an open model called OLMo to set an example.
- Without access to training data, it is hard to understand why models behave as they do; more openness could improve safety as deployment grows.