OpenAI Forms Safety Team Despite Pursuing Risky AGI Development
- OpenAI has created a new team to mitigate catastrophic AI risks, including nuclear threats
- The Preparedness team will track and protect against AI dangers such as persuasion and cybersecurity attacks
- OpenAI says it takes AI safety seriously, but its actions appear contradictory
- CEO Sam Altman has expressed anxiety about dangerous AI, yet OpenAI continues building AGI
- It is strange that OpenAI frets about AI catastrophe while actively developing the very technology that could cause it