Tech founder warns powerful AI could threaten humanity within years, urges leaders to act fast on regulation
• Connor Leahy had an "awakening" in 2019 when he realized AI could become uncontrollable and threaten humanity. He now runs an AI safety company and warns that AI capabilities are advancing faster than our ability to control them.
• Speaking at the World Economic Forum meeting, Leahy argued that regulating deepfakes should be a top priority: a successful deepfake crackdown could build momentum for tackling broader AI risks.
• Leahy advocates holding tech companies liable for harms caused by the AI systems they build, which he argues could curb reckless development.
• Leahy proposes an international "compute cap" to limit the scale of AI experiments. This could buy time to build better safeguards.
• Leahy worries that too many decision-makers ignore AI risks. He says meaningful regulation is needed urgently, since powerful AI could arrive within one to five years.