Kaspersky Warns Widespread AI Adoption Enables Deepfake Attacks for Fraud and Misinformation
- Widespread AI/ML adoption has given threat actors new tools, such as deepfakes, to carry out attacks, says Kaspersky
- Deepfakes use AI to synthesize fake images, video, and audio to perpetrate fraud, commit identity theft, and steal data
- A Kaspersky survey found that 51% of employees believe they can identify deepfakes, but only 25% actually could
- Deepfakes are used to impersonate people without their consent, spread misinformation, and enable scams
- AI-fueled misinformation is flagged as a common risk for India and Pakistan; deepfakes have already been used in Pakistani politics