AI Deepfakes Fuel Alarming Rise in Online Sexual Extortion and Abuse
- AI-generated "deepfakes" and child sexual abuse material (CSAM) are being used for sexual extortion and bullying, including cases involving minors.
- Social media platforms such as Twitter/X are particularly vulnerable after gutting their trust and safety teams, though the problem affects the entire tech industry.
- Open-source AI models such as Stable Diffusion have been misused to create realistic fake CSAM with little effort.
- Legal loopholes may prevent outright bans on AI-generated CSAM, so new laws targeting model developers may be needed.
- Tech companies need robust, multi-layered detection systems and dedicated teams to combat this growing threat.