
Search engines required to stamp out AI-generated images of child abuse under Australia’s new code

  • New industry code in Australia requires search engines like Google and Bing to eliminate child abuse material from results, including AI-generated deepfakes.

  • Code prompted by concerns that new AI tools like ChatGPT could be used to generate illegal content like child abuse images and terrorist propaganda.

  • Companies must research technologies to help users detect deepfakes, and regularly improve AI systems to prevent illegal content in results.

  • Australia's eSafety Commissioner says new rules compel tech firms to not just reduce harms but build safety tools into services from the start.

  • A separate Australian Federal Police (AFP) initiative is using AI to detect child abuse material, asking adults to submit their own childhood photos to help train the system.

theguardian.com
Relevant topic timeline:
Generative AI models like ChatGPT pose risks to content and data privacy, as they can scrape and reuse content without attribution, potentially leading to lost traffic and revenue and fuelling ethical debates about AI innovation. Blocking the Common Crawl bot and implementing paywalls can offer some protection, but as the technology evolves, companies must stay vigilant and adapt their defenses against content scraping.
Generative AI tools are causing concern in the tech industry as they flood the web with unreliable, low-quality content, raising issues of authorship, misinformation, and a potential information crisis.
Attorneys general from all 50 states have called on Congress to establish protective measures against AI-generated child sexual abuse images and expand existing restrictions on such materials. They argue that the government needs to act quickly to prevent the potentially harmful use of AI technology in creating child exploitation material.
Australia's internet regulator has drafted a new code that requires search engines like Google and Bing to prevent the sharing of child sexual abuse material created by artificial intelligence, and also prohibits the AI functions of search engines from producing deepfake content.
Paedophiles are using open source AI models to create child sexual abuse material, according to the Internet Watch Foundation, raising concerns about the potential for realistic and widespread illegal content.
The Internet Watch Foundation (IWF) has observed a surge in AI-generated child sexual abuse material (CSAM) circulating online, raising concerns about the ability to identify and protect real children in need. Law enforcement and policymakers are working to address the growing problem of deepfake content created with generative AI platforms, including proposed US legislation to prevent the use of deceptive AI in elections.
The generative AI boom has led to a "shadow war for data," as AI companies scrape information from the internet without permission, sparking a backlash among content creators and raising concerns about copyright and licensing in the AI world.
Large corporations are grappling with whether to embrace generative AI tools like ChatGPT, given concerns over copyright and security risks; some companies have banned internal use of the technology for now, though these bans may be temporary as firms work out how to use it responsibly, maximizing efficiency without compromising sensitive information.