Open-sourcing AI models enables progress but risks misuse
- Open-sourcing AI models enables democratization and research, but once weights are released it becomes impossible to prevent misuse such as deepfakes or impersonation.
- Retraining a model to remove its safeguards is cheap and easy: a few hundred fine-tuning examples can suffice.
- Openness lets users employ models for both good and bad purposes. Should model creators be liable for misuse?
- Restricting open source concentrates power in big tech companies and governments, and openness has enabled much of the field's progress.
- As models grow more advanced, they may become as concerning as nuclear weapons. We may need to restrict open release before that point.