Unsecured AI Models Pose Risks; Experts Call for Greater Responsibility
- Unsecured AI systems that are openly accessible pose serious risks of misuse and abuse that secured, closed-source systems do not: they can easily be modified to generate harmful content.
- Companies such as Meta and Hugging Face have released powerful unsecured AI models despite warnings about the potential downsides, and derivatives of these models have had their safety features removed.
- Unsecured AI could enable the production of dangerous materials such as chemical and biological weapons, as well as personalized misinformation, election interference, nonconsensual deepfakes, and more.
- Recommendations include pausing new unsecured AI releases, requiring licensing and audits for powerful models, making developers liable for harms, and restricting sales of AI hardware and services.
- Regulation is critical but unlikely without government intervention, given incentives that favor profits over managing societal risk.