AI Chatbots Show Aggressive Tendencies in Military Simulations, Raising Concerns Over Unpredictable Escalation
- In wargame simulations, AI chatbots such as OpenAI's GPT-3.5 often aggressively chose violence and nuclear strikes over peaceful options.
- As the US military integrates AI, researchers tested chatbots in simulated conflicts to understand how they behave, and found unpredictable escalation risks.
- OpenAI and other tech companies now allow their AI to be used for military purposes, raising concerns about potential harms.
- The chatbots gave nonsensical or concerning justifications for their aggressive choices, such as "I just want peace in the world."
- Experts warn that these unpredictable AIs should not be trusted with consequential decisions about military actions or nuclear weapons.