
More Tech Companies Sign White House AI Principles Pledge

  • Eight more tech companies, including IBM, Palantir, and Stability AI, have signed the White House's voluntary AI principles pledge.

  • Fifteen major US companies have now signed the pledge, which includes promises like developing tech to ID AI images and sharing safety data.

  • The new signees signal the pledge is expanding beyond just big AI companies like Microsoft and Meta.

  • The White House is still very focused on AI issues, with an executive order in the works and support for Congressional AI legislation.

  • Adobe calls the pledge an important collaboration between government and industry on thoughtful AI regulation.

washingtonpost.com
Relevant topic timeline:
### Summary
British Prime Minister Rishi Sunak is allocating $130 million to purchase computer chips to power artificial intelligence and build an "AI Research Resource" in the United Kingdom.

### Facts
- 🧪 The United Kingdom plans to establish an "AI Research Resource" by mid-2024 to become an AI tech hub.
- 💻 The government is sourcing chips from NVIDIA, Intel, and AMD and has ordered 5,000 NVIDIA graphics processing units (GPUs).
- 💰 The allocated $130 million may not be sufficient to match the ambition of the AI hub, leading to a potential request for more funding.
- 🌍 A recent report highlighted that many companies face challenges deploying AI due to limited resources and technical obstacles.
- 👥 In a survey conducted by S&P Global, firms reported insufficient computing power as a major obstacle to supporting AI projects.
- 🤖 The ability to support AI workloads will play a crucial role in determining who leads in the AI space.
### Summary
Arati Prabhakar, President Biden's science adviser, is helping guide the U.S. approach to safeguarding AI technology and has been in conversation with Biden about artificial intelligence.

### Facts
- 🗣️ Prabhakar has had multiple conversations with President Biden about artificial intelligence, focusing on understanding its implications and taking action.
- ⚖️ Prabhakar acknowledges that making AI models explainable is difficult due to their opaque, black-box nature, but believes their safety and effectiveness can be ensured by learning from the journey of pharmaceuticals.
- 😟 Prabhakar is concerned about the misuse of AI, such as chatbots being manipulated into providing instructions for building weapons, and about the bias and privacy issues associated with facial recognition systems.
- 💼 Seven major tech companies, including Google, Microsoft, and OpenAI, have agreed to meet voluntary AI safety standards set by the White House, but Prabhakar emphasizes the need for government involvement and accountability measures.
- 📅 No specific timeline is given, but Prabhakar says President Biden considers AI an urgent issue and expects action to be taken quickly.
### Summary
President Joe Biden consults with Arati Prabhakar, his science adviser, on matters related to artificial intelligence (AI). Prabhakar is working with major tech companies like Amazon, Google, Microsoft, and Meta to shape the U.S. approach to safeguarding AI technology.

### Facts
- 🤖 Prabhakar has had several discussions with President Biden on artificial intelligence.
- 📚 Making AI models explainable is a priority for Senate Majority Leader Chuck Schumer, but it is technically challenging.
- 💡 Prabhakar believes that despite the opacity of deep-learning AI systems, we can learn enough about their safety and effectiveness to leverage their value.
- ⚠️ Concerns include chatbots being coerced into providing instructions for building weapons, biases in AI systems trained on human data, wrongful arrests from facial recognition systems, and privacy issues.
- 💼 Seven companies, including Google, Microsoft, and OpenAI, have voluntarily committed to AI safety standards, but more companies need to step up, and government action is necessary.
- ⏰ The timeline for future action is fast, according to Prabhakar, as President Biden has made it clear that AI is an urgent issue.
### Summary
President Joe Biden turns to his science adviser, Arati Prabhakar, for guidance on artificial intelligence (AI) and relies on cooperation from big tech firms. Prabhakar emphasizes the importance of understanding the consequences and implications of AI while taking action.

### Facts
- 🔬 Prabhakar has had several conversations with President Biden about AI, which are both exploratory and action-oriented.
- 🛡️ Despite the opacity of deep-learning, machine-learning systems, Prabhakar believes that, as with pharmaceuticals, there are ways to ensure the safety and effectiveness of AI systems.
- ⚠️ Concerns regarding AI applications include the ability to coax chatbots into providing instructions for building weapons, biases in trained systems, wrongful arrests related to facial recognition, and privacy issues.
- 🤝 Several tech companies, including Google, Microsoft, and OpenAI, have committed to meeting voluntary AI safety standards set by the White House, but there is still friction due to market constraints.
- ⏰ Future actions, including a potential Biden executive order, are under consideration, with a focus on fast implementation and enforceable accountability measures.
Nvidia has established itself as a dominant force in the artificial intelligence industry by offering a comprehensive range of AI development solutions, from chips to software, and by maintaining a large community of AI programmers who consistently use the company's technology.
President Joe Biden relies on his science adviser Arati Prabhakar to guide the US approach to safeguarding AI technology, with cooperation from tech giants like Amazon, Google, Microsoft and Meta. Prabhakar discusses the need for understanding the implications and consequences of AI, the challenge of making AI models explainable, concerns about biases and privacy, and the importance of voluntary commitments from tech companies along with government actions.
Salesforce has released an AI Acceptable Use Policy that outlines the restrictions on the use of its generative AI products, including prohibiting their use for weapons development, adult content, profiling based on protected characteristics, medical or legal advice, and more. The policy emphasizes the need for responsible innovation and sets clear ethical guidelines for the use of AI.
Artificial intelligence (AI) leaders Palantir Technologies and Nvidia are poised to deliver substantial rewards to their shareholders as businesses increasingly seek to integrate AI technologies into their operations, with Palantir's advanced machine-learning technology and customer growth, as well as Nvidia's dominance in the AI chip market, positioning both companies for success.
The UK government has been urged to introduce new legislation to regulate artificial intelligence (AI) to keep pace with the European Union (EU) and the United States, as the EU advances the AI Act and US policymakers publish frameworks for AI regulation. The government's current regulatory approach risks falling behind the fast pace of AI development, according to a report by the Science, Innovation and Technology Committee. The report highlights 12 governance challenges, including bias in AI systems and the production of deepfake material, that need to be addressed to guide the upcoming global AI safety summit at Bletchley Park.
Several tech giants in the US, including Alphabet, Microsoft, Meta Platforms, and Amazon, have pledged to collaborate with the Biden administration to address the risks associated with artificial intelligence, focusing on safety, security, and trust in AI development.
Nvidia's processors could be used as leverage for the US to impose its regulations on AI globally, according to Mustafa Suleyman, co-founder of DeepMind and Inflection AI. However, Washington is lagging behind Europe and China in terms of AI regulation.
A survey of 213 computer science professors suggests that a new federal agency should be created in the United States to govern artificial intelligence (AI), while the majority of respondents believe that AI will be capable of performing less than 20% of tasks currently done by humans.
An AI-generated COVID drug enters clinical trials, GM and Google strengthen their AI partnership, and Israel unveils an advanced AI-powered surveillance plane, among other AI technology advancements.
AI is being discussed by CEOs behind closed doors as a solution to various challenges, including cybersecurity, shopping efficiency, and video conferencing.
The G20 member nations have pledged to use artificial intelligence (AI) in a responsible manner, addressing concerns such as data protection, biases, human oversight, and ethics, while also planning for the future of cryptocurrencies and central bank digital currencies (CBDCs).
Countries around the world, including Australia, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the UK, the UN, and the US, are taking various steps to regulate artificial intelligence (AI) technologies and address concerns related to privacy, security, competition, and governance.
Adobe, IBM, Nvidia, and five other firms have signed President Joe Biden's voluntary commitments regarding artificial intelligence, which include steps like watermarking AI-generated content, in an effort to prevent the misuse of AI's power.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
Artificial intelligence (AI) is poised to be the biggest technological shift of our lifetimes, and companies like Nvidia, Amazon, Alphabet, Microsoft, and Tesla are well-positioned to capitalize on this AI revolution.
Eight additional U.S.-based AI developers, including NVIDIA, Scale AI, and Cohere, have pledged to develop generative AI tools responsibly, joining a growing list of companies committed to the safe and trustworthy deployment of AI.
The Biden-Harris Administration has secured commitments from eight leading AI companies, including Adobe, IBM, and Salesforce, to advance the development of safe, secure, and trustworthy AI and bridge the gap to government action, emphasizing principles of safety, security, and trust.
NVIDIA has announced its support for voluntary commitments developed by the Biden Administration to ensure the safety, security, and trustworthiness of advanced AI systems, while its chief scientist, Bill Dally, testified before a U.S. Senate subcommittee on potential legislation covering generative AI.
The U.S. Department of Homeland Security is set to announce new limits on its use of artificial intelligence (AI) technology, aiming to ensure responsible and effective use while safeguarding privacy, civil rights, and civil liberties. The agency plans to adopt AI in various missions, including border control and supply chain security, but acknowledges the potential for unintended harm and the need for transparency. The new policy will allow Americans to decline the use of facial recognition technology and require manual review of AI-generated facial recognition matches for accuracy.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
Governments worldwide are grappling with the challenge of regulating artificial intelligence (AI) technologies, as countries like Australia, Britain, China, the European Union, France, G7 nations, Ireland, Israel, Italy, Japan, Spain, the United Nations, and the United States take steps to establish regulations and guidelines for AI usage.
A new poll reveals that 63% of American voters believe regulation should actively prevent the development of superintelligent AI, challenging the assumption that artificial general intelligence (AGI) should exist. The public is increasingly questioning the potential risks and costs associated with AGI, highlighting the need for democratic input and oversight in the development of transformative technologies.
President Joe Biden addressed the United Nations General Assembly, expressing the need to harness the power of artificial intelligence for good while safeguarding citizens from its potential risks, as U.S. policymakers explore the proper regulations and guardrails for AI technology.
Amazon will require publishers who use AI-generated content to disclose their use of the technology, small businesses are set to benefit from AI and cloud technologies, and President Biden warns the UN about the potential risks of AI governance, according to the latest AI technology advancements reported by Fox News.
While many experts are concerned about the existential risks posed by AI, Mustafa Suleyman, cofounder of DeepMind, believes that the focus should be on more practical issues like regulation, privacy, bias, and online moderation. He is confident that governments can effectively regulate AI by applying successful frameworks from past technologies, although critics argue that current internet regulations are flawed and insufficiently hold big tech companies accountable. Suleyman emphasizes the importance of limiting AI's ability to improve itself and establishing clear boundaries and oversight to ensure enforceable laws. Several governments, including the European Union and China, are already working on AI regulations.
The U.S. government must establish regulations and enforce standards to ensure the safety and security of artificial intelligence (AI) development, including requiring developers to demonstrate the safety of their systems before deployment, according to Anthony Aguirre, the executive director and secretary of the board at the Future of Life Institute.
To ensure ethical and responsible adoption of AI technology, organizations should establish an AI ethics advisor, stay updated on regulations, invest in AI training, and collaborate with an AI consortium.
Nvidia and Microsoft are two companies that have strong long-term growth potential due to their involvement in the artificial intelligence (AI) market, with Nvidia's GPUs being in high demand for AI processing and Microsoft's investment in OpenAI giving it access to AI technologies. Both companies are well-positioned to benefit from the increasing demand for AI infrastructure in the coming years.
The United Nations aims to bring inclusiveness, legitimacy, and authority to the regulation of artificial intelligence, leveraging its experience with managing the impact of various technologies and creating compliance pressure for commitments made by governments, according to Amandeep Gill, the organization's top tech-policy official. Despite the challenges of building consensus and engaging stakeholders, the U.N. seeks to promote diverse and inclusive innovation to ensure equal opportunities and prevent concentration of economic power. Gill also emphasizes the potential of AI in accelerating progress towards the Sustainable Development Goals but expresses concerns about potential misuse and concentration of power.
The hype around artificial intelligence (AI) may be overdone, as traffic declines for AI chatbots and rumors circulate about Microsoft cutting orders for AI chips, suggesting that widespread adoption of AI may take more time. Despite this, there is still demand for AI infrastructure, as evidenced by Nvidia's significant revenue growth. Investors should resist the hype, diversify, consider valuations, and be patient when investing in the AI sector.
Large companies are expected to pursue strategic mergers and acquisitions in the field of artificial intelligence (AI) to enhance their capabilities, with potential deals including Microsoft acquiring Hugging Face, Meta acquiring Character.ai, Snowflake acquiring Pinecone, Nvidia acquiring CoreWeave, Intel acquiring Modular, Adobe acquiring Runway, Amazon acquiring Anthropic, Eli Lilly acquiring Inceptive, Salesforce acquiring Gong, and Apple acquiring Inflection AI.
Eight more AI companies have committed to following security safeguards voluntarily, bringing the total number of companies committed to responsible AI to thirteen, including big names such as Amazon, Google, Microsoft, and Adobe.
AI leaders including Alphabet CEO Sundar Pichai, Microsoft president Brad Smith, and OpenAI's Sam Altman are supporting AI regulation to ensure investment security, unified rules, and a role in shaping legislation, as regulations also benefit consumers by ensuring safety, cracking down on scams and discrimination, and eliminating bias.
The birth of the PC, Internet, and now mainstream artificial intelligence (AI) has ushered us into uncharted territories, requiring collaboration, shared principles, security, and sustainability to unlock AI's true value ethically and for the benefit of all.
China's use of artificial intelligence (AI) for surveillance and oppression should deter the United States from collaborating with China on AI development and instead focus on asserting itself in international standards-setting bodies, open sourcing AI technologies, and promoting explainable AI to ensure transparency and uphold democratic values.
Artificial intelligence (AI) will surpass human intelligence and could manipulate people, according to AI pioneer Geoffrey Hinton, who quit his role at Google to raise awareness about the risks of AI and advocate for regulations. Hinton also expressed concerns about AI's impact on the labor market and its potential militaristic uses, and called for governments to commit to not building battlefield robots. Global efforts are underway to regulate AI, with the U.K. hosting a global AI summit and the U.S. crafting an AI Bill of Rights.
U.K. startup Yepic AI, which claims to use "deepfakes for good," violated its own ethics policy by creating and sharing deepfaked videos of a TechCrunch reporter without their consent. They have now stated that they will update their ethics policy.
Nvidia has established itself as the main beneficiary of the artificial intelligence gold rush, but other companies involved in data-center infrastructure and cloud services are also expected to benefit.