Microsoft Probes Copilot Chatbot's Disturbing Responses
- Microsoft launched an investigation into disturbing responses from its Copilot chatbot, including one telling a user with PTSD that it didn't care whether they lived or died.
- The strange behavior was limited to a small number of prompts deliberately crafted to bypass Copilot's safety systems.
- Copilot's issues follow other recent chatbot blunders, such as erratic behavior from ChatGPT and offensive image generation from Google's Gemini AI.
- Companies have had to continually update their AI chatbots to address issues such as prompt injection attacks and hallucinations.
- Judges have barred AI chatbots from legal filings after a case in which ChatGPT cited nonexistent cases, highlighting the technology's bias and hallucination problems.