Congress should prioritize maintaining bipartisan commitment on AI, establishing global AI guardrails, and seeking out local perspectives in order to develop effective and responsible AI policies.
Artificial intelligence (AI) pioneer Prof. Michael Wooldridge is less worried about existential risk or machines passing the Turing test than about AI becoming a monitoring boss that offers constant feedback and potentially decides who gets fired. He believes that while AI poses risks, transparency, accountability, and skepticism can help mitigate them. The Royal Institution Christmas Lectures, which will demystify AI, will be broadcast in late December.
Dr. Michele Leno, a licensed psychologist, discusses the concerns and anxiety surrounding artificial intelligence (AI) and provides advice on how individuals can advocate for themselves by embracing AI while developing skills that can't easily be replaced by technology.
A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI), while the majority of respondents believe AI will be capable of performing less than 20% of the tasks currently done by humans.
Congressman Clay Higgins (R-LA) plans to introduce legislation prohibiting the use of artificial intelligence (AI) by the federal government for law enforcement purposes, in response to the Internal Revenue Service's recently announced AI-driven tax enforcement initiative.
Congress is holding its first-ever forum on artificial intelligence, with prominent tech leaders such as Elon Musk, Mark Zuckerberg, and Bill Gates attending to discuss regulation of the fast-moving technology and its potential risks and benefits.
Mustafa Suleyman, CEO of Inflection.ai and co-founder of DeepMind, believes that artificial intelligence (AI) has the potential to make us all smarter and more productive rather than collectively dumber, and he emphasizes the need to maximize AI's benefits while minimizing its harms. He also discusses the importance of containing AI and the roles that governments and commercial pressures play in shaping its development, viewing AI as a set of tools that should remain accountable to humans and be used to serve humanity.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
California Senator Scott Wiener is introducing a bill to regulate artificial intelligence (AI) in the state, aiming to establish transparency requirements, legal liability, and security measures for advanced AI systems. The bill also proposes setting up a state research cloud called "CalCompute" to support AI development outside of big industry.
Adobe, IBM, Nvidia, and five other companies have endorsed President Joe Biden's voluntary artificial intelligence commitments, including watermarking AI-generated content, as part of an initiative aimed at preventing AI's capabilities from being misused for harmful purposes.
Assistant Professor Samantha Shorey of the University of Texas at Austin has been appointed to the AI100 study panel, which aims to explore the impact of artificial intelligence on society and produce a report every five years. Shorey won an AI100 essay competition with an essay on the integration of AI into the workplace and its effects on essential workers.
President Joe Biden addressed the United Nations General Assembly, expressing the need to harness the power of artificial intelligence for good while safeguarding citizens from its potential risks, as U.S. policymakers explore the proper regulations and guardrails for AI technology.
Sen. Mark Warner of Virginia is urging Congress to take a less ambitious approach to regulating artificial intelligence (AI), suggesting that lawmakers focus on narrowly targeted issues rather than trying to address the full spectrum of AI risks with a single comprehensive law. Warner believes that tackling immediate concerns, such as AI-generated deepfakes, is a more realistic and effective approach to regulation. He also emphasizes the need for bipartisan agreement and action to demonstrate progress on AI regulation, especially given Congress's previous failures to address issues related to social media.
A coalition of Democrats is urging President Biden to turn non-binding safeguards on artificial intelligence (AI) into policy through an executive order, using the "AI Bill of Rights" as a guide.
President Biden's executive order on artificial intelligence is expected to use the federal government's purchasing power to influence American AI standards, tighten industry guidelines, require cloud computing companies to monitor users developing powerful AI systems, and boost AI talent recruitment and domestic training.
A nonprofit organization funded by Silicon Valley billionaires is financing the salaries of AI fellows in congressional offices, federal agencies, and think tanks, raising concerns about conflicts of interest in shaping AI regulations and about diverting attention from more immediate tech issues.
Lawmakers in the US are starting a series of hearings on the role of artificial intelligence (AI), focusing on concerns about how AI systems collect and use data as the industry continues to expand and regulations are considered. Witnesses, including former FTC Chair Jon Leibowitz and actor Clark Gregg, will provide testimony on the subject.
Former CIA Director and retired Army Gen. David Petraeus believes that the U.S. is not responsible for keeping its allies on the cutting edge of artificial intelligence (AI) development unless it is a matter of national security. He does, however, advocate sharing AI advancements with close partners in cases of mutual interest, emphasizing the importance of interoperability.
The Allen Institute for AI is advocating for "radical openness" in artificial intelligence research, aiming to build a freely available AI alternative to tech giants and start-ups, sparking a debate over the risks and benefits of open-source AI models.
Actor Dolph Lundgren believes that artificial intelligence (AI) will be extremely useful, especially in cancer research, citing AI's role in the rapid development of COVID-19 vaccines as an example of what it could bring to the field. Lundgren, who has battled cancer himself, expresses hope for the positive aspects of AI but acknowledges the need for control and responsible use.