Taylor Swift Images Removed Over Harms
- Explicit AI-generated images of Taylor Swift went viral on social media sites such as Reddit and Facebook before spreading to the platform X, where one image drew 47 million views.
- X, Meta, and AI companies such as OpenAI have policies against nonconsensual nude imagery and worked to remove the content, but critics argue their business models often profit from viral sharing before enforcement catches up.
- Deepfakes are AI-generated videos and images that realistically impersonate real people, often for nonconsensual pornographic purposes. Over 96% of deepfakes target women.
- Legislation criminalizing malicious deepfakes varies globally: some US states ban them, and countries such as the UK and South Korea impose restrictions, but regulators must balance harms against technological progress.
- Swift's fanbase and lawmakers mobilized against the images, arguing that tech platforms must better enforce their rules against harms like nonconsensual intimate imagery. The White House called the incident "alarming."