Meta's Oversight Board Probes Handling of AI-Generated Explicit Images
- Meta's Oversight Board is investigating how Facebook and Instagram handled two cases of explicit AI-generated images of public figures that violated the platforms' policies.
- In one case, Instagram failed to remove an AI-generated nude image of an Indian celebrity despite multiple user reports.
- In the other, Facebook successfully removed an explicit AI-generated image resembling a US public figure.
- The article discusses concerns around deepfakes and online gender-based violence, especially in India.
- The board is seeking public comments on the harms of deepfake pornography and on potential issues with Meta's approach to detecting AI-generated explicit images.