
Attorneys General from all 50 states urge Congress to help fight AI-generated CSAM

Attorneys general from all 50 states have called on Congress to establish protective measures against AI-generated child sexual abuse images and expand existing restrictions on such materials. They argue that the government needs to act quickly to prevent the potentially harmful use of AI technology in creating child exploitation material.

Source: engadget.com
Relevant topic timeline:
Iowa educators are using artificial intelligence to determine which books should be banned from school libraries in compliance with new state legislation that restricts explicit sexual content, resulting in the removal of 19 books including "The Handmaid's Tale" and "Beloved."
AI-generated inventions need to be allowed patent protection to encourage innovation and maximize social benefits, as current laws hinder progress in biomedicine; jurisdictions around the world have differing approaches to patenting AI-generated inventions, and the US falls behind in this area, highlighting the need for legislative action.
Major media organizations are calling for new laws to protect their content from being used by AI tools without permission, expressing concerns over unauthorized scraping and the potential for AI to produce false or biased information.
Salesforce has released an AI Acceptable Use Policy that outlines the restrictions on the use of its generative AI products, including prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.
Artificial intelligence should be controlled by humans to prevent its weaponization and ensure safety measures are in place, according to Microsoft's president Brad Smith. He stressed the need for regulations and laws to govern AI, comparing it to other technologies that have required safety brakes and human oversight. Additionally, Smith emphasized that AI is a tool to assist humans, not to replace them, and that it can help individuals think more efficiently.
Artificial intelligence (AI) tools can put human rights at risk, as highlighted by researchers from Amnesty International on the Me, Myself, and AI podcast, who discuss scenarios in which AI is used to track activists and make automated decisions that can lead to discrimination and inequality, emphasizing the need for human intervention and changes in public policy to address these issues.
AI-generated videos are targeting children online, raising concerns about their safety, while there are also worries about AI causing job losses and becoming oppressive bosses; however, AI has the potential to protect critical infrastructure and extend human life.
The authors propose a framework for assessing the potential harm caused by AI systems in order to address concerns about "Killer AI" and ensure responsible integration into society.
The UK government is at risk of contempt of court if it fails to improve its response to requests for transparency about the use of artificial intelligence (AI) in vetting welfare claims, according to the information commissioner. The government has been accused of maintaining secrecy over the use of AI algorithms to detect fraud and error in universal credit claims, and it has refused freedom of information requests and blocked MPs' questions on the matter. Child poverty campaigners have expressed concerns about the potential devastating impact on children if benefits are suspended.
The use of AI in the entertainment industry, such as body scans and generative AI systems, raises concerns about workers' rights, intellectual property, and the potential for broader use of AI in other industries, infringing on human connection and privacy.
The top prosecutors in all 50 states are urging Congress to establish an expert commission to study and legislate against the use of artificial intelligence to exploit children through pornography.
The infiltration of artificial intelligence into children's lives is causing anxiety and sparking fears about the perversion of children's culture, as AI tools create unsettling and twisted representations of childhood innocence. This trend continues a long history of cultural anxieties about dangerous interactions between children and technology, with films like M3GAN and Frankenstein depicting the dangers of AI. While there is a need to address children's use and understanding of AI, it is important not to succumb to moral panics and instead focus on promoting responsible AI use and protecting children's rights.
Attorneys general from all 50 states and four territories are urging Congress to establish an expert commission to study the potential exploitation of children through generative AI and to expand laws against child sexual abuse material (CSAM) to cover AI-generated materials.
Australia's eSafety Commissioner has introduced an industry code that requires tech giants like Google and Microsoft to eliminate child abuse material from their search results and prevent generative AI from producing deepfake versions of such material.
The lack of regulation surrounding artificial intelligence in healthcare is a significant threat, according to the World Health Organization's European regional director, who highlights the need for positive regulation to prevent harm while harnessing AI's potential.
Australia's internet regulator has drafted a new code that requires search engines like Google and Bing to prevent the sharing of child sexual abuse material created by artificial intelligence, and also prohibits the AI functions of search engines from producing deepfake content.
State attorneys general, including Oklahoma's Attorney General Gentner Drummond, are urging Congress to address the consequences of artificial intelligence on child pornography, expressing concern that AI-powered tools are making prosecution more challenging and creating new opportunities for abuse.
Congressman Clay Higgins (R-LA) plans to introduce legislation prohibiting the use of artificial intelligence (AI) by the federal government for law enforcement purposes, in response to the Internal Revenue Service's recently announced AI-driven tax enforcement initiative.
Eight technology companies, including Salesforce and Nvidia, have joined the White House's voluntary artificial intelligence pledge, which aims to mitigate the risks of AI and includes commitments to develop technology for identifying AI-generated images and sharing safety data with the government and academia.
Paedophiles are using open source AI models to create child sexual abuse material, according to the Internet Watch Foundation, raising concerns about the potential for realistic and widespread illegal content.
A surge in AI-generated child sexual abuse material (CSAM) circulating online has been observed by the Internet Watch Foundation (IWF), raising concerns about the ability to identify and protect real children in need. Efforts are being made by law enforcement and policymakers to address the growing issue of deepfake content created using generative AI platforms, including the introduction of legislation in the US to prevent the use of deceptive AI in elections.
The Department of Homeland Security (DHS) has released new guidelines for the use of artificial intelligence (AI), including a policy that prohibits the collection and dissemination of data used in AI activities and a requirement for thorough testing of facial recognition technologies to ensure there is no unintended bias.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
Adversaries and criminal groups are exploiting artificial intelligence (AI) technology to carry out malicious activities, according to FBI Director Christopher Wray, who warned that while AI can automate tasks for law-abiding citizens, it also enables the creation of deepfakes and malicious code, posing a threat to US citizens. The FBI is working to identify and track those misusing AI, but is cautious about using it themselves. Other US security agencies, however, are already utilizing AI to combat various threats, while concerns about China's use of AI for misinformation and propaganda are growing.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.