Cincinnati Police Test AI Robots to Cut Crime, But Critics Worry About Racial Bias and Loss of Privacy
- Cincinnati police are testing AI security robots made by Knightscope to reduce crime, but critics warn the machines may reinforce racial biases and over-police minority and poor communities.
- The robots' constant surveillance evokes Foucault's panopticon: people who know they are always watched may discipline their own behavior, but at the cost of privacy and accountability.
- AI policing algorithms often exhibit racial bias, in part because they are trained on data drawn predominantly from white men, and could therefore worsen discrimination.
- By shifting accountability away from humans, these robots let police deflect blame and criticism onto the machines rather than accepting it themselves.
- Citing civil rights concerns, police departments such as Chicago's have suspended predictive policing AIs, and the risks likely outweigh any crime-reduction benefits.