AI Still Vulnerable to Attacks; Fully Secure Systems Remain Elusive
-
AI systems remain vulnerable to several classes of attack, including evasion, poisoning, privacy breaches, and abuse. Claims of fully secure AI are exaggerated ("snake oil").
-
Evasion attacks trick a deployed model into misclassifying inputs. Poisoning attacks corrupt a model by tampering with its training data.
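To make the evasion case concrete, here is a minimal FGSM-style sketch against a hand-rolled logistic classifier. The weights, bias, input, and step size are all illustrative assumptions, not values from the source; the point is only that a small, signed perturbation of the input flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed "trained" linear model (hypothetical values for illustration).
w = np.array([1.5, -2.0])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)  # probability of class 1

x = np.array([1.0, 0.2])  # a clean input the model places in class 1

# For a linear model, the gradient of the class-1 score w.r.t. the
# input is just w, so stepping against sign(w) lowers the score (FGSM).
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x))      # > 0.5: classified as class 1
print(predict(x_adv))  # < 0.5: evaded into class 0
```

The perturbation is bounded per-coordinate by `eps`, which is why evasion attacks can succeed with changes that look minor to a human observer.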
-
Privacy attacks extract sensitive information about a model or its training data. Abuse attacks repurpose legitimate AI capabilities for harmful ends.
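One simple privacy attack is loss-threshold membership inference: models tend to have lower loss on examples they were trained on, so an attacker guesses "training member" whenever an example's loss is below a threshold. The sketch below uses synthetic loss distributions (assumed, not from the source) to show why the gap is exploitable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed loss distributions: training examples (members) sit at low
# loss, held-out examples (non-members) at noticeably higher loss.
member_losses = rng.normal(0.2, 0.1, 1000)
nonmember_losses = rng.normal(0.8, 0.3, 1000)

threshold = 0.5  # attacker's guess rule: loss < threshold => "member"
tpr = np.mean(member_losses < threshold)     # members correctly flagged
fpr = np.mean(nonmember_losses < threshold)  # non-members wrongly flagged

print(tpr, fpr)  # attack works when tpr is well above fpr
```

The wider the train/test loss gap (i.e., the more the model overfits), the stronger this attack gets, which is why overfitting is itself a privacy risk.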
-
Better mitigation methods are needed, along with a deeper understanding of AI vulnerabilities across both training and deployment.
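One widely studied mitigation is adversarial training: each update step also trains on perturbed copies of the data, so the model learns to resist small evasion-style shifts. This is a toy sketch on synthetic, linearly separable data with a logistic model; the dataset, step sizes, and perturbation budget are all assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic labels

w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.1  # learning rate and perturbation budget (assumed)

for _ in range(200):
    # FGSM-perturbed copies: for logistic loss, the input gradient is
    # (p - y) * w, so step along its sign to increase the loss.
    p_clean = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign(np.outer(p_clean - y, w))

    # Train on clean and adversarial examples together.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p - y_all)) / len(y_all)
    b -= lr * np.mean(p - y_all)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
print(acc)  # clean accuracy after adversarially augmented training
```

Even in this toy setting, enlarging `eps` forces the model to trade some clean accuracy for robustness, which previews the tradeoff noted below.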
-
Current systems face tradeoffs among security, accuracy, and fairness: hardening a model against attack can reduce its accuracy or its fairness.