AI Detection of Online Child Abuse Stifled by Privacy Laws
-
Social media companies rely on AI to detect child sexual abuse material, but law enforcement often cannot open these AI-generated reports without a warrant, which delays investigations.
-
Because of privacy protections, U.S. law bars law enforcement from opening AI-generated abuse reports unless a human at the company has reviewed the flagged content first.
-
Investigation delays let alleged predators go undetected longer, putting more potential victims at risk. And when companies disable flagged accounts, evidence can disappear before police are able to access it.
-
Prosecutors say most of the tips they receive from major social media companies are not actionable because they were generated by AI and lack the specifics needed to obtain a warrant.
-
Child safety experts argue that AI is effective only at finding previously known abuse images; human moderators are still needed to identify new victims and evolving material.