Researchers Warn Data Poisoning Attacks Could Mislead AI Chatbots for Just $60
- Researchers found that for just $60, malicious actors could poison the data AI chatbots are trained on, causing the models to give inaccurate answers.
- They could buy expired domains and fill them with misinformation, which then gets scraped into AI training data sets.
- Attackers could also strategically time Wikipedia edits to add junk information right before snapshots are taken for AI training data.
- The attacks require little technical sophistication yet allow attackers to influence how AI models behave.
- The researchers are especially concerned about future AI tools that can take actions in the real world, since such tools could be hijacked through data poisoning.