Posted 3/1/2024, 6:02:29 PM
Malicious AI Models With Backdoors Uploaded to Hugging Face, Evading Security Scanners
- Malicious code was found in models uploaded to Hugging Face that could install backdoors on user devices
- One model opened a reverse shell giving full control of a device to a remote attacker
- The malicious code evaded Hugging Face's malware scanner by abusing Python's pickle serialization module, which can embed arbitrary executable code inside a model file
- The model was submitted by a user with the name "baller432"
- Hugging Face has since removed the malicious models after being notified
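To see why pickle-based model files are dangerous, here is a minimal, benign sketch of the general technique (not the actual malware described above): Python's pickle protocol lets an object's `__reduce__` method specify an arbitrary callable to invoke at load time, so merely deserializing a file can execute attacker-controlled code. The `Payload` class and its `print` call are hypothetical stand-ins for illustration only.

```python
import pickle
import pickletools

class Payload:
    """Hypothetical illustration of a pickle payload."""
    def __reduce__(self):
        # pickle will call this function with these args on load.
        # A benign print stands in here; real malware could instead
        # invoke a function that opens a reverse shell.
        return (print, ("pickle payload executed",))

blob = pickle.dumps(Payload())

# Simply loading the file runs the embedded call -- the Payload
# class itself does not even need to exist on the victim's machine.
pickle.loads(blob)  # prints: pickle payload executed

# The serialized opcode stream contains a REDUCE instruction, which is
# what static scanners must detect without ever executing the file.
ops = [op.name for op, arg, pos in pickletools.genops(blob)]
assert "REDUCE" in ops
```

This is why loading pickle-based model weights from untrusted sources is unsafe by design, and why formats that store only tensor data, such as safetensors, are increasingly preferred for model distribution.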