
Australia requires search engines to block AI-generated child abuse content

  • Australia will require search engines like Google and Bing to prevent sharing of AI-generated child abuse material.

  • A new code, drafted by the companies at the government's request, will require the removal of such content from search results.

  • The code also requires that AI functions built into search engines not produce synthetic versions of child abuse material.

  • The e-Safety Commissioner said the earlier code did not cover AI-generated content, and so asked the companies to revise it.

  • The regulator approved the new version of the code drafted by tech companies to reflect developments in generative AI.

reuters.com
Relevant topic timeline:
Generative AI models like ChatGPT pose risks to content and data privacy, as they can scrape and use content without attribution, potentially leading to loss of traffic and revenue, and fueling ethical debates about AI innovation. Blocking the Common Crawl bot and implementing paywalls can offer some protection, but as the technology evolves, companies must stay vigilant and adapt their defenses against content scraping.
Several major news outlets, including the New York Times, CNN, Reuters, and the Australian Broadcasting Corporation, have blocked OpenAI's web crawler, GPTBot, which is used to scan webpages and improve their AI models, raising concerns about the use of copyrighted material in AI training.
Salesforce has released an AI Acceptable Use Policy that outlines the restrictions on the use of its generative AI products, including prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.
Attorneys general from all 50 states have called on Congress to establish protective measures against AI-generated child sexual abuse images and expand existing restrictions on such materials. They argue that the government needs to act quickly to prevent the potentially harmful use of AI technology in creating child exploitation material.
Australia's eSafety Commissioner has introduced an industry code that requires tech giants like Google and Microsoft to eliminate child abuse material from their search results and prevent generative AI from producing deepfake versions of such material.
Paedophiles are using open source AI models to create child sexual abuse material, according to the Internet Watch Foundation, raising concerns about the potential for realistic and widespread illegal content.
A surge in AI-generated child sexual abuse material (CSAM) circulating online has been observed by the Internet Watch Foundation (IWF), raising concerns about the ability to identify and protect real children in need. Law enforcement and policymakers are working to address the growing problem of deepfake content created with generative AI platforms, including proposed US legislation to prevent the use of deceptive AI in elections.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and insufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
Microsoft's Bing search engine is receiving several AI improvements, including the integration of OpenAI's DALL-E 3 model, the ability to provide more personalized answers based on prior chats, and the addition of invisible digital watermarks to AI-generated images for content authenticity. These enhancements aim to improve user experience and ensure responsible image generation.
Internet freedom is declining globally due to the use of artificial intelligence (AI) by governments for online censorship and the manipulation of images, audio, and text for disinformation, according to a new report by Freedom House. The report calls for stronger regulation of AI, transparency, and oversight to protect human rights online.