Google's Hassabis Calls for IPCC-Like Body to Oversee AI Safety, Citing Risks Like Bioweapons
- Demis Hassabis of Google's DeepMind says AI risks must be treated as seriously as climate change, and suggests starting with an IPCC-like body for AI oversight.
- Hassabis warns that AI could aid in creating bioweapons and could pose an existential threat if super-intelligent systems are developed.
- The UK is hosting the first AI safety summit in November, with AI leaders such as Hassabis attending. The summit will focus on threats such as bioweapons.
- Hassabis says current AI systems pose little risk, but future, more advanced ones may require oversight of how they are used.
- Hassabis is optimistic about AI's potential but says a "middle way" is needed to manage the technology responsibly.