The article discusses Google's recent keynote at Google I/O and its focus on AI. It highlights the poor presentation and lack of new content during the event. The author reflects on Google's previous success in AI and its potential to excel in this field. The article also explores the concept of AI as a sustaining innovation for big tech companies and the challenges they may face. It discusses the potential impact of AI regulations in the EU and the role of open source models in the AI landscape. The author concludes by suggesting that the battle between centralized models and open source AI may be the defining war of the digital era.
Mustafa Suleyman, co-founder of DeepMind and Inflection AI, explains in his new book, "The Coming Wave," why our systems are ill-equipped to handle the pace of technological advancement, including the potential threats posed by AI and the need for better methods of assessing them.
Google DeepMind has commissioned 13 artists to create diverse and accessible art and imagery that aims to change the public’s perception of AI, countering the unrealistic and misleading stereotypes often used to represent the technology. The artwork visualizes key themes related to AI, such as artificial general intelligence, chip design, digital biology, large image models, language models, and the synergy between neuroscience and AI, and it is openly available for download.
Artificial intelligence, particularly generative AI, is being embraced by the computer graphics and visual effects community at the 50th SIGGRAPH conference, with a focus on responsible and ethical AI, despite concerns about the technology's impact on Hollywood and the creative process.
Artificial intelligence (AI) is seen as a tool that can inspire and collaborate with human creatives in the movie and TV industry, but concerns remain about copyright and ethical issues, according to Greg Harrison, chief creative officer at MOCEAN. Although AI has potential for visual brainstorming and automation of non-creative tasks, it should be used cautiously and in a way that values human creativity and culture.
U.S. Senate Majority Leader Chuck Schumer will host a closed-door artificial intelligence forum on September 13, featuring tech leaders such as Elon Musk, Mark Zuckerberg, and Sundar Pichai, to lay down a new foundation for AI policy.
Google has announced a new tool, called SynthID, which embeds a digital "watermark" into AI-generated images, making it harder to spread fake images and disinformation.
The AI Stage agenda at TechCrunch Disrupt 2023 features discussions on topics such as AI valuations, ethical AI, AI in the cloud, AI-generated disinformation, robotics and self-driving cars, AI in movies and games, generative text AI, and real-world case studies of AI-powered industries.
Senate Majority Leader Chuck Schumer's upcoming AI summit in Washington D.C. will include key figures from Hollywood and Silicon Valley, reflecting the growing threat AI poses to the entertainment industry amid the ongoing Hollywood strikes. The event aims to establish a framework for regulating AI, but forming legislation will take time and involve multiple forums.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) in order to keep up with the European Union (EU) and the United States, as the EU advances with the AI Act and US policymakers publish frameworks for AI regulation. The government's current regulatory approach risks lagging behind the fast pace of AI development, according to a report by the Science, Innovation and Technology Committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed in order to guide the upcoming global AI safety summit at Bletchley Park.
Mustafa Suleyman, co-founder of Google's DeepMind, predicts that within the next five years, everyone will have their own AI-powered personal assistants that intimately know their personal information and boost productivity.
Senators Richard Blumenthal and Josh Hawley are holding a hearing to discuss legislation on regulating artificial intelligence (AI), with a focus on protecting against potential dangers posed by AI and improving transparency and public trust in AI companies. The bipartisan legislation framework includes creating an independent oversight body, clarifying legal liability for AI harms, and requiring companies to disclose when users are interacting with AI models or systems. The hearing comes ahead of a major AI Insight Forum, where top tech executives will provide insights to all 100 senators.
Tech CEOs Elon Musk and Mark Zuckerberg will be participating in Senate Majority Leader Chuck Schumer's first AI Insight Forum, where lawmakers will have the opportunity to hear from them about artificial intelligence.
AI tools from OpenAI, Microsoft, and Google are being integrated into productivity platforms like Microsoft Teams and Google Workspace, offering a wide range of AI-powered features for tasks such as text generation, image generation, and data analysis, although concerns remain regarding accuracy and cost-effectiveness.
At the Google I/O developer conference, Google announced Gemini, an upcoming AI system that combines the strengths of DeepMind's AlphaGo with extensive language modeling capabilities and aims to outperform AI systems like OpenAI's ChatGPT, potentially revolutionizing interactive AI.
An AI program called AlphaMissense developed by Google DeepMind can predict whether missense mutations in DNA are harmless or likely to cause disease, aiding in research and medical diagnosis of rare disorders.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
President Joe Biden addressed the United Nations General Assembly, expressing the need to harness the power of artificial intelligence for good while safeguarding citizens from its potential risks, as U.S. policymakers explore the proper regulations and guardrails for AI technology.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, co-founder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and fail to sufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
Open source and artificial intelligence have a deep connection, as open-source projects and tools have played a crucial role in the development of modern AI, including popular AI generative models like ChatGPT and Llama 2.