Experts Urge Caution on AI Safety Rules to Avoid Stifling Innovation
- Governments are considering measures to address extreme risks from AI, such as AI-powered weapons and cyberattacks. However, these risks remain largely theoretical and poorly understood.
- Hasty regulation could prove counterproductive, entrenching big tech firms and stifling AI innovation. It could also constrain open-source models.
- More study is needed to understand AI risks before rules or institutions are established. Bodies such as the OECD should collaborate on research into AI safety.
- Governments could agree on a voluntary code of conduct under which AI model makers share their risk-management practices. Makers of open-source models should also take part.
- Eventual regulation may come to resemble the regimes governing technologies such as nuclear power and bioengineering, but developing effective rules will take time and deliberation.