OpenAI Forms Preparedness Team to Monitor Dangers of AI Like ChatGPT
• OpenAI lays out plan for "Preparedness" team to monitor dangers of its AI like ChatGPT, led by MIT professor Aleksander Madry
• Team will hire experts to continually test the technology and warn if its capabilities become dangerous, such as providing instructions for building chemical weapons
• Sits between the existing "Safety" team and the future-looking "Superalignment" team, which researches AI that surpasses human abilities
• Amid debate in the tech community over how dangerous AI could be, OpenAI stakes out a middle ground in its public stance
• Company will allow "qualified, independent third-parties" to test its technology to stay ahead of potential issues