Concerns Raised Over Israeli Military's Use of AI to Pick Airstrike Targets
- Israel's military allegedly used an AI program called "Lavender" to suggest airstrike targets in Gaza with minimal human oversight, raising concerns about accuracy and civilian casualties.
- AI and autonomous systems are developing rapidly for military applications like target identification, raising fears about "killer robots" and the loss of human control.
- Lavender reportedly identified targets with roughly 90% accuracy, meaning about one in ten people it marked was misidentified; without meaningful human review, those errors could compound on the ground (see the sketch after this list).
- Militaries are exploring AI to help process data and suggest actions faster, but some situations require human judgment, oversight, and "tactical patience."
- UN leaders and experts warn that AI should not autonomously target and kill humans, but some nations are reluctant to impose restrictions on military AI development.
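To make the accuracy claim concrete, here is a minimal back-of-the-envelope sketch of how a fixed error rate scales with target volume. The 90% figure comes from the reporting; the target count used below is a hypothetical placeholder for illustration, not a reported number.

```python
# Back-of-the-envelope illustration of how a fixed error rate scales.
# The 90% accuracy figure is from the reporting; the number of marked
# targets below is a hypothetical placeholder, not a reported statistic.

reported_accuracy = 0.90       # share of marked targets reported as correct
hypothetical_targets = 10_000  # assumed volume of AI-marked targets

expected_misidentified = hypothetical_targets * (1 - reported_accuracy)
print(f"Expected misidentified people: {expected_misidentified:.0f}")
# -> Expected misidentified people: 1000
```

The point of the arithmetic is simply that a seemingly high accuracy rate still implies large absolute error counts when a system operates at scale, which is why critics stress meaningful human review of each individual target.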