ChatGPT Often Gives Inaccurate Antibiotic Warnings, But AI Models Are Improving for Medical Info
- The artificial intelligence (AI) chatbot ChatGPT recently provided incorrect or fabricated responses for 29 of 41 antibiotics when queried about their FDA boxed warnings.
- In testing, ChatGPT's responses matched the actual FDA boxed warnings for only 12 of 41 antibiotics, an accuracy rate of about 29%.
- Researchers warn that uncritical use of ChatGPT for medical information is risky, since the model is designed for language processing, not scientific accuracy.
- However, AI tools like ChatGPT are advancing rapidly, with current versions scoring better than 90% of medical board candidates.
- The researchers plan to compare ChatGPT's accuracy across versions and against other AI models, such as Google's Med-PaLM.