AI Risk Comparisons to Nuclear Weapons Exaggerated; Focus Needed on Current Harms
-
Comparisons between AI and nuclear risks are often exaggerated. Current AI systems pose real harms that warrant regulation now, not just hypothetical existential threats.
-
AI labs invoke nuclear-level risk comparisons while resisting regulation: they seek exclusive licensing regimes yet describe their current systems as low risk. These positions are contradictory.
-
Nuclear safety principles hold that small failures compound into larger ones. By the same logic, addressing current AI harms is how we prepare for emergent risks, whether from AGI or otherwise.
-
Promoting AI-extinction narratives distracts from regulating current harms; the EU AI Act has already been weakened after industry lobbying.
-
There is no scientific basis for predicting if or when AGI will emerge. Nuclear risks are empirically studied; AI extinction remains hypothetical, yet the narrative diverts attention from regulating present-day systems.