DeepMind Forms New AI Safety Group to Address Harms Like Bias and Misinformation
- Google forms a new AI Safety and Alignment organization within DeepMind to focus on AI safety problems such as harmful medical advice and bias amplification.
- The organization includes a new team working on safety for artificial general intelligence (AGI), alongside the existing Scalable Alignment group.
- Anca Dragan, formerly of Waymo, will lead the group while continuing to run an AI safety lab at UC Berkeley.
- Skepticism and concern about AI harms such as misinformation and unreliability remain high among the public and enterprises.
- Dragan acknowledges that AI safety challenges are intractable, but says DeepMind aims to invest more in safety frameworks, uncertainty estimation, monitoring, and other safeguards.