Tech Giants Form AI Safety Group Amid Concerns Over Risks of New AI Systems
- Google, Microsoft, OpenAI, and Anthropic formed an industry group, the Frontier Model Forum, to promote AI safety, backed by $10 million in funding.
- The fund will support research into safely developing and evaluating frontier AI models.
- Critics say the fund appears aimed mostly at promoting AI capabilities while downplaying risks.
- Watchdog groups warn that AI image generators are being used to create illegal child sexual abuse material.
- The companies say they support AI regulation but appear to prefer industry-led oversight that won't impede AI development.
![](https://i.kinja-img.com/image/upload/c_fill,h_675,pg_1,q_80,w_1200/393ff1f9a87b133e38f0f18341686947.jpg)