Search engines required to stamp out AI-generated images of child abuse under Australia’s new code
- A new industry code in Australia requires search engines such as Google and Bing to eliminate child abuse material from results, including AI-generated deepfakes.
- The code was prompted by concerns that new AI tools like ChatGPT could be used to generate illegal content such as child abuse images and terrorist propaganda.
- Companies must research technologies to help users detect deepfakes and regularly improve their AI systems to prevent illegal content from appearing in results.
- Australia's eSafety Commissioner says the new rules compel tech firms not just to reduce harms but to build safety tools into their services from the start.
- A separate AFP initiative is using AI to detect child abuse material, asking adults to submit their own childhood photos to help train the system.