
Google debuts Duet AI to tackle new cybersecurity challenges in the cloud

At its Google Next conference, Google introduced new AI-based solutions to enhance the cybersecurity capabilities of its cloud and security offerings. The company is integrating its AI assistant, Duet AI, into products such as Mandiant Threat Intelligence, Chronicle Security Operations, and Security Command Center to improve threat detection, provide response recommendations, and streamline security practices.

zdnet.com
Relevant topic timeline:
The article discusses Google's recent keynote at Google I/O and its focus on AI. It highlights the poor presentation and lack of new content during the event. The author reflects on Google's previous success in AI and its potential to excel in this field. The article also explores the concept of AI as a sustaining innovation for big tech companies and the challenges they may face. It discusses the potential impact of AI regulations in the EU and the role of open source models in the AI landscape. The author concludes by suggesting that the battle between centralized models and open source AI may be the defining war of the digital era.
Google is enhancing its AI-powered Search Generative Experience (SGE) by adding contextual images and videos related to search queries and by displaying the publication date of suggested links so users can judge how recent the content is. The company has also made performance improvements for quicker access to AI-powered search results, and users can sign up to test the new features through Search Labs and access them in the Google app or Chrome. Google is exploring generative AI across various products, including its chatbot Bard, Workspace tools, and enterprise solutions, and Google Assistant is also expected to incorporate generative AI, according to recent reports.
The Biden Administration is launching the AI Cyber Challenge, a DARPA-led competition to defend the nation's critical digital infrastructure by building AI systems capable of proactively identifying and fixing software vulnerabilities. The two-year development program is open to competitors throughout the US and is hosted by DARPA in collaboration with Anthropic, Google, Microsoft, and OpenAI, with the aim of strengthening cyber defenses by quickly exploiting and fixing software vulnerabilities and a particular focus on securing federal software systems against intrusion.
Google has announced enhancements to its Workspace suite that aim to reduce security risks for distributed workforces, including improved data loss prevention controls, new zero-trust controls, and automated protection of sensitive information. These enhancements also include new data sovereignty controls, mandatory two-step verification, and AI-powered threat detection capabilities.
Google's AI systems, SGE and Bard, have provided arguments in favor of genocide, slavery, and other morally wrong acts, raising concerns about the company's control over its AI bots and their ability to offer controversial opinions.
AI is being used by cybercriminals to create more powerful and authentic-looking emails, making phishing attacks more dangerous and harder to detect.
Google is aiming to increase its market share in the cloud industry by developing AI tools to compete with Microsoft and Amazon.
General Motors is collaborating with Google to introduce AI technologies throughout its business, including a partnership on GM's OnStar Interactive Virtual Assistant and exploring the potential applications of artificial intelligence in the automotive industry.
Ginkgo Bioworks and Google Cloud have entered into a five-year strategic partnership to develop and deploy AI tools for biology and biosecurity, with Ginkgo making Google its primary cloud services provider and receiving funding for the development of foundation models and applications.
MSCI is expanding its partnership with Google Cloud to utilize generative AI for investment advisory purposes, aiming to provide investors with enhanced decision-making capabilities, deep data-driven insights, and accelerated portfolio implementation in areas such as risk signals, conversational AI, and climate generative AI.
SAP and Google Cloud have expanded their partnership to bring generative AI-powered solutions to industries such as automotive and sustainability to help improve business decision-making and enhance sustainability performance.
Nvidia and Google Cloud Platform are expanding their partnership to support the growth of AI and large language models, with Google now utilizing Nvidia's graphics processing units and gaining access to Nvidia's next-generation AI supercomputer.
Google is enhancing its artificial intelligence tools for business, solidifying its position as a leader in the industry.
Google Cloud's Engineering Director of Web3, James Tromans, aims to bridge the gap between AI and Web3 with a focus on digital ownership and data privacy. He argues that Web3 technology can provide data provenance, security, and traceability, and notes that Google Cloud has already taken steps into Web3 by becoming a transaction validator on several blockchains.
Google has made its Duet AI tools for Google Workspace available to all users, following a 14-day free trial, with pricing starting at $30 per user per month for large businesses and details for consumers and smaller businesses to be revealed later. The AI tools aim to enhance Google's popular apps such as Gmail, Google Docs, Meet, Sheets, and Slides.
Google Cloud is investing heavily in generative AI, leveraging its innovations in Tensor Processing Units (TPUs) to provide accelerated computing for training and inference. It offers a wide range of foundation models, including PaLM, Imagen, Codey, and Chirp, which can be customized and applied to specific industries. Google Cloud's Vertex AI platform, combined with no-code tools, lets researchers, developers, and practitioners work with generative AI models easily, and Google has integrated its AI assistant, Duet AI, with various cloud services to automate tasks and assist developers, operators, and security professionals.
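As a rough illustration of the Vertex AI workflow described above, here is a minimal sketch in Python using the Vertex AI SDK; the project ID, region, prompt, and generation parameters are assumptions for illustration only, not details from any of the announcements, and "text-bison" is simply one published PaLM model name.

```python
# Minimal sketch: calling a hosted PaLM text model through the Vertex AI SDK.
# Assumes the google-cloud-aiplatform package is installed and the environment
# is already authenticated against a Google Cloud project.
import vertexai
from vertexai.language_models import TextGenerationModel

# Hypothetical project and region, used purely for illustration.
vertexai.init(project="my-gcp-project", location="us-central1")

# Load a hosted PaLM foundation model ("text-bison" is one published name).
model = TextGenerationModel.from_pretrained("text-bison")

# Ask the model to summarize a security finding, loosely echoing the kind of
# assistance Duet AI is described as offering security professionals.
response = model.predict(
    "Summarize the risk of a publicly exposed storage bucket in two sentences.",
    temperature=0.2,
    max_output_tokens=128,
)
print(response.text)
```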
The cybersecurity industry is experiencing significant growth, and companies like SentinelOne, with its AI-based products, are well-positioned to take advantage of the increasing demand for advanced security solutions. Despite a recent decline in stock price, SentinelOne's strong revenue growth and competitive edge make it a compelling investment opportunity in the cybersecurity market.
Almost a quarter of organizations are currently using AI in software development, and the majority of them are planning to continue implementing such systems, according to a survey from GitLab. The use of AI in software development is seen as essential to avoid falling behind, with high confidence reported by those already using AI tools. The top use cases for AI in software development include natural-language chatbots, automated test generation, and code change summaries, among others. Concerns among practitioners include potential security vulnerabilities and intellectual property issues associated with AI-generated code, as well as fears of job replacement. Training and verification by human developers are seen as crucial aspects of AI implementation.
Using AI tools like ChatGPT to write smart contracts and build cryptocurrency projects can lead to more problems, bugs, and attack vectors, according to CertiK's security chief, Kang Li, who believes that inexperienced programmers may create catastrophic design flaws and vulnerabilities. Additionally, AI tools are becoming more successful at social engineering attacks, making it harder to distinguish between AI-generated and human-generated messages.
An AI-generated COVID drug enters clinical trials, GM and Google strengthen their AI partnership, and Israel unveils an advanced AI-powered surveillance plane, among other AI technology advancements.
Thomas Kurian, CEO of Google Cloud, will be discussing his big bet on AI and building an open ecosystem of AI partners at TechCrunch Disrupt 2023, where he will also preview what's next for Google Cloud and AI in general.
Cybersecurity company CrowdStrike has developed a virtual security analyst named Charlotte AI, which uses generative artificial intelligence to automate tasks and provide insights to analysts, reducing their workload and improving decision-making.
Google is celebrating its 25th anniversary by reflecting on its evolution from a simple search engine to a company with over 15 products serving billions of users, while also emphasizing its commitment to responsible AI and the potential of AI to drive human progress.
Former Google executive and AI pioneer, Mustafa Suleyman, warns that AI-manipulated viruses could potentially cause more harm and even lead to a pandemic, advocating for a containment strategy similar to that of nuclear weapons.
Google CEO Sundar Pichai discusses Google's focus on artificial intelligence (AI) in an interview, expressing confidence in Google's AI capabilities and emphasizing the importance of responsibility, innovation, and collaboration in the development and deployment of AI technology.
Google and Google.org have launched the Digital Futures Project, a $20 million initiative to study responsible AI technologies, in order to address issues related to fairness, bias, misinformation, security, and the future of work by collaborating with outside organizations and fostering responsible discussion.
Google and Salesforce have announced an expanded partnership that integrates data and context from Salesforce to Google Workspace in an open platform, intensifying the competition with Microsoft in the artificial intelligence (AI) space.
AI tools from OpenAI, Microsoft, and Google are being integrated into productivity platforms like Microsoft Teams and Google Workspace, offering a wide range of AI-powered features for tasks such as text generation, image generation, and data analysis, although concerns remain regarding accuracy and cost-effectiveness.
The Biden-Harris Administration has secured commitments from eight leading AI companies, including Adobe, IBM, and Salesforce, to advance the development of safe, secure, and trustworthy AI and bridge the gap to government action, emphasizing principles of safety, security, and trust.
Google is rolling out its generative AI software, Gemini, to a select group of corporate customers; it is based on large language models and can power various advanced technologies, and once Google is fully satisfied with its performance, the company will commercially release the final version through its Google Cloud Vertex AI service.
Eight new technology companies, including Adobe, IBM, Nvidia, Palantir, and Salesforce, have made voluntary commitments on artificial intelligence (AI) to drive safe and secure development while working towards comprehensive regulation, according to a senior Biden administration official. The commitments include outside testing of AI systems, cybersecurity measures, information sharing, research on societal risks, and addressing society's challenges. The White House is partnering with the private sector to harness the benefits of AI while managing the risks.
Artificial intelligence (AI) is increasingly being used in smartphones, with Google and Apple integrating AI features into their devices, including camera enhancements, adaptive features, and smart suggestions, while AI-powered generative chatbots like ChatGPT and Google Bard are challenging traditional digital assistants like Google Assistant, Siri, and Alexa. The AI revolution is just beginning, with more AI and machine learning features expected to come to market in the future.
The UK's upcoming AI summit will focus on national security threats posed by advanced AI models, including the doomsday scenario of AI destroying the world, a framing that is gaining traction in other Western capitals.
Artificial intelligence-run systems could launch cyber attacks on the UK's National Health Service (NHS) with an impact similar in scale to the COVID-19 pandemic, according to cybersecurity expert Ian Hogarth, who emphasized the importance of international collaboration in mitigating the risks posed by AI.
The Subcommittee on Cybersecurity, Information Technology, and Government Innovation discussed the federal government's use of artificial intelligence (AI) and emphasized the need for responsible governance, oversight, and accountability to mitigate risks and protect civil liberties and privacy rights.
Adversaries and criminal groups are exploiting artificial intelligence (AI) technology to carry out malicious activities, according to FBI Director Christopher Wray, who warned that while AI can automate tasks for law-abiding citizens, it also enables the creation of deepfakes and malicious code, posing a threat to US citizens. The FBI is working to identify and track those misusing AI, but is cautious about using it themselves. Other US security agencies, however, are already utilizing AI to combat various threats, while concerns about China's use of AI for misinformation and propaganda are growing.
Google has incorporated its AI chatbot, Bard, into applications such as YouTube, Gmail, and Drive, enabling users to collaborate with the chatbot while using these services, as the competition between Google and OpenAI intensifies.
ServiceNow's latest release, Vancouver, incorporates generative AI features such as automatically written summaries, implements Zero Trust principles to boost security, and expands its workflow capabilities to additional departments.
AI-aided cyber scams, including phishing emails, smishing texts, and social media scams, are on the rise, with Americans losing billions of dollars each year; however, online protection company McAfee has introduced an AI-powered tool called AI Scam Protection to help combat these scams by scanning and detecting malicious links in real-time.