AI Safety in Focus as Leading Firms Make Advances and Missteps
- DeepMind released a paper proposing a framework for evaluating the societal and ethical risks of AI systems; the timing coincides with the upcoming AI Safety Summit.
- A Stanford study ranked major AI models on transparency criteria. Google's PaLM 2 scored poorly, raising questions about Google DeepMind's commitment to transparency.
- Microsoft research found that GPT-4 can be more easily prompted to generate toxic text than other models.
- OpenAI launched web browsing in ChatGPT and moved DALL-E 3 into beta, while competitors released free alternatives ahead of GPT-4V.
- Meta is making progress toward reading minds through brain decoding, reconstructing visual perception from brain scans, raising the kind of ethical questions DeepMind's framework is meant to evaluate.