Study Finds AI Chatbots Like ChatGPT Perpetuate Racial Bias in Medical Advice
-
When asked medical questions, AI chatbots like ChatGPT perpetuate false, racist ideas about Black patients, Stanford researchers find.
-
Incorrect answers included fabricated, race-based differences in skin thickness, lung capacity, and kidney function.
-
Such AI biases could cause real-world medical harm if chatbots are used for tasks such as drafting emails to patients.
-
Patients are already asking chatbots to help diagnose skin lesions and other symptoms, a trend that concerns doctors.
-
Researchers urge more testing of commercial AI systems to ensure they are fair and equitable before they are used in medical settings.