Tech Giants Form AI Safety Group Amid Concerns Over Risks of New AI Systems
- Google, Microsoft, OpenAI, and Anthropic have formed an industry group, the Frontier Model Forum, to promote AI safety, backed by $10 million in funding.
- The fund will support research into developing and evaluating AI models safely.
- Critics say the fund appears aimed mostly at promoting AI capabilities while playing down the risks.
- Watchdog groups warn that AI art generators are being used to create illegal child sexual abuse images.
- The companies say they support AI regulation but appear to prefer industry-led oversight that will not impede AI progress.