The U.S. is falling behind in regulating artificial intelligence (AI) while Europe has passed the world's first comprehensive AI law. President Joe Biden recently met with industry leaders to discuss the need for AI regulation, and companies pledged to develop safeguards for AI-generated content and to prioritize user privacy.
The use of AI algorithms by insurance companies to assess claims is raising concerns about potential bias and lack of human oversight, prompting Pennsylvania legislators to propose legislation that would regulate the use of AI in claims processing.
Artificial intelligence (AI) poses risks in the legal industry, including ethical dilemmas, reputational damage, and discrimination, according to legal technology experts. AI-generated content produced without proper human oversight could compromise the quality of legal representation and raise concerns about professional responsibility. The Equal Employment Opportunity Commission (EEOC) recently settled a lawsuit involving discriminatory use of AI in the workplace, underscoring the technology's potential to discriminate. Because law is a reputation-reliant field in which trust and credibility are crucial, disseminating AI-generated content without scrutiny may expose lawyers or law firms to reputational damage and legal consequences. Other legal cases involving AI include allegations of copyright infringement.
A global survey by Salesforce indicates that consumers have a growing distrust of firms using AI, with concerns about unethical use of the technology, while an Australian survey found that most people believe AI creates more problems than it solves.
Artificial intelligence (AI) tools can put human rights at risk, as highlighted by researchers from Amnesty International on the Me, Myself, and AI podcast. They discuss scenarios in which AI is used to track activists and to make automated decisions that can lead to discrimination and inequality, emphasizing the need for human intervention and changes in public policy to address these issues.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances the AI Act and US policymakers publish frameworks for AI regulation. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the Commons Science, Innovation and Technology Committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that must be addressed to guide the upcoming global AI safety summit at Bletchley Park.
A taskforce established by the UK Trades Union Congress (TUC) aims to develop legislation to protect workers from the negative impacts of artificial intelligence (AI) in the workplace, focusing on issues such as privacy infringement and potential discrimination. The taskforce plans to produce a draft law next spring, with support from both Labour and Conservative officials, aimed at ensuring the fair and just application of AI technologies.
The digital transformation driven by artificial intelligence (AI) and machine learning will have a significant impact on various sectors, including healthcare, cybersecurity, and communications, and has the potential to alter how we live and work in the future. However, ethical concerns and responsible oversight are necessary to ensure the positive and balanced development of AI technology.
Attorneys general from all 50 states have called on Congress to establish protective measures against AI-generated child sexual abuse images and expand existing restrictions on such materials. They argue that the government needs to act quickly to prevent the potentially harmful use of AI technology in creating child exploitation material.
The Supreme Court's "major questions doctrine" could hinder the regulation of artificial intelligence (AI) by expert agencies, potentially freezing investment and depriving AI platforms that adhere to higher standards of funding, creating uncertainty and slowing progress in the field.
The lack of regulation surrounding artificial intelligence in healthcare is a significant threat, according to the World Health Organization's European regional director, who highlights the need for positive regulation to prevent harm while harnessing AI's potential.
An assessment of concerns about artificial intelligence and democracy examines fears that AI could undermine democratic processes, including the threat posed by Chinese misinformation campaigns and Senator Josh Hawley's call for AI regulation.
State attorneys general, including Oklahoma Attorney General Gentner Drummond, are urging Congress to address the consequences of artificial intelligence for child pornography, expressing concern that AI-powered tools are making prosecution more challenging and creating new opportunities for abuse.
The Internal Revenue Service (IRS) is using artificial intelligence to investigate tax evasion at large partnerships, targeting wealthy Americans and corporations in an effort to crack down on tax cheats and tackle complex cases that have overwhelmed the agency; however, the IRS's use of AI has drawn criticism and raised concerns about trust and equitable enforcement practices.
Congressman Clay Higgins (R-LA) plans to introduce legislation prohibiting the use of artificial intelligence (AI) by the federal government for law enforcement purposes, in response to the Internal Revenue Service's recently announced AI-driven tax enforcement initiative.
Government agencies at the state and city levels in the United States are exploring the use of generative artificial intelligence (AI) to streamline bureaucratic processes, but they also face unique challenges related to transparency and accountability, such as ensuring accuracy, protecting sensitive information, and avoiding the spread of misinformation. Policies and guidelines are being developed to regulate the use of generative AI in government work, with a focus on disclosure, fact-checking, and human review of AI-generated content.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
New initiatives and regulators are taking action against false information online, just as artificial intelligence threatens to make the problem worse.
Google will require political advertisements that use artificial intelligence to disclose the use of AI-generated content, in order to prevent misleading and predatory campaign ads.
Artificial intelligence poses real threats due to its newness and rawness, including ethical challenges, regulatory and legal challenges, bias and fairness issues, lack of transparency and explainability, privacy and data-ownership concerns, safety and security risks, high energy consumption, job loss or displacement, and the difficulty of managing hype and expectations.
The UK government is showing increased concern about the potential risks of artificial intelligence (AI) and the influence of the "Effective Altruism" (EA) movement, which warns of the existential dangers of super-intelligent AI and advocates for long-term policy planning; critics argue that the focus on future risks distracts from the real ethical challenges of AI in the present and raises concerns of regulatory capture by vested interests.
The UK's competition watchdog has warned against assuming a positive outcome from the boom in artificial intelligence, citing risks such as false information, fraud, and high prices, and emphasizing the potential for harm if AI development undermines consumer trust or concentrates market power in the hands of a few companies.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, co-founder of DeepMind, believes the focus should be on more practical issues such as regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and fail to hold big tech companies sufficiently accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and of establishing clear boundaries and oversight so that laws are enforceable. Several governments, including the European Union and China, are already working on AI regulations.
The UK's deputy prime minister, Oliver Dowden, will use a speech at the UN general assembly to warn that artificial intelligence is developing too fast for regulation, and will call on other countries to collaborate in creating an international regulatory system to address the potential threats posed by AI technology.