AI Researchers Call for Safe Harbor to Allow Independent Safety Testing of Closed Systems
- Over 100 top AI researchers signed an open letter calling on AI companies to allow independent evaluation of their systems, arguing that opaque usage rules prevent safety testing.
- Researchers fear having their accounts banned, or facing lawsuits, for testing AI systems without a company's approval.
- The letter asks companies including OpenAI, Meta, and Midjourney to provide a legal and technical safe harbor for interrogating their products.
- AI companies have grown more aggressive about shutting out auditors, with OpenAI claiming the New York Times's ChatGPT searches amounted to "hacking" and Meta threatening to revoke licenses if IP infringement is alleged.
- Beyond safe harbor, the researchers say companies should provide direct channels for reporting vulnerabilities, rather than forcing them to rely on public shaming, which makes the relationship adversarial.