AI Chatbots Could Provide Guidance for Biological Attacks, but More Testing of the Threat Is Needed, Report Finds
AI chatbots could provide guidance to help plan and execute biological attacks, according to a new report.
The report, by the RAND Corporation, tested large language models (LLMs) and found they could supply information that bridges knowledge gaps about biological agents.
The LLMs discussed how to obtain and transport agents such as the bacterium that causes plague, as well as delivery mechanisms for toxins such as botulinum.
The models also suggested cover stories for acquiring dangerous bacteria under the guise of legitimate research.
The report concludes that more testing is needed, but warns that LLMs may represent a new category of threat beyond the information already available online.