### Summary
Artificial intelligence (AI) in operational technology (OT) raises concerns about potential impacts, testing, and reliability, and its adoption therefore requires careful governance and risk management to ensure safety and accuracy.
### Facts
- AI in OT presents significant consequences in terms of safety, liability, and brand damage.
- Microsoft proposes a blueprint for public governance of AI to address emerging issues and safety concerns.
- Red team and blue team exercises can help secure OT systems by simulating cyberattacks and testing defense strategies.
- Using AI in red team blue team exercises can identify vulnerabilities and improve overall system security.
- Digital twins, virtual replicas of OT environments, can be used to test and optimize technology changes before implementing them in real-world operations.
- However, the risks of applying digital twin test results to real-world operations are significant and must be carefully managed.
- AI can enhance security operations center (SOC) capabilities, minimize noise in alarm management, and support staff in OT businesses (a rough sketch of the alarm-noise idea follows this item).
- AI adoption in OT should prioritize safety and reliability, limiting adoption to lower-impact areas.
- AI in OT has the potential to improve systems, safety, and efficiency, but safety and risk management must be prioritized.
Source: [VentureBeat](https://venturebeat.com/2023/08/20/the-impact-of-artificial-intelligence-on-operational-technology/)
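As a rough illustration of the alarm-noise point above, here is a minimal Python sketch that suppresses repeated OT alarms within a time window before they reach an operator. The alarm fields, the five-minute window, and the suppression rule are illustrative assumptions, not details from the article.

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    timestamp: float   # seconds since epoch
    source: str        # e.g. a PLC or sensor identifier
    message: str

def suppress_repeats(alarms, window_s=300):
    """Drop alarms that repeat the same (source, message) within window_s seconds.

    A crude stand-in for the kind of noise reduction an AI-assisted SOC
    might apply before alerts reach a human operator.
    """
    last_emitted = {}   # (source, message) -> timestamp of last alarm passed through
    emitted = []
    for alarm in sorted(alarms, key=lambda a: a.timestamp):
        key = (alarm.source, alarm.message)
        prev = last_emitted.get(key)
        if prev is None or alarm.timestamp - prev >= window_s:
            emitted.append(alarm)
            last_emitted[key] = alarm.timestamp
    return emitted

# Three identical alarms inside five minutes collapse to one; the fourth,
# arriving after the window, is kept.
raw = [
    Alarm(0.0, "pump-7", "pressure high"),
    Alarm(60.0, "pump-7", "pressure high"),
    Alarm(120.0, "pump-7", "pressure high"),
    Alarm(400.0, "pump-7", "pressure high"),
]
print(len(suppress_repeats(raw)))  # -> 2
```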
### Summary
President Joe Biden consults with Arati Prabhakar, his science adviser, on matters related to artificial intelligence (AI). Prabhakar is working with major tech companies like Amazon, Google, Microsoft, and Meta to shape the U.S. approach to safeguarding AI technology.
### Facts
- 🤖 Prabhakar has had several discussions with President Biden on artificial intelligence.
- 📚 Making AI models explainable is a priority for Senate Majority Leader Chuck Schumer, but it is technically challenging.
- 💡 Prabhakar believes that despite the opacity of deep-learning AI systems, we can learn enough about their safety and effectiveness to leverage their value.
- ⚠️ Concerns include chatbots being coerced into providing instructions for building weapons, biases in AI systems trained on human data, wrongful arrests from facial recognition systems, and privacy issues.
- 💼 Seven companies, including Google, Microsoft, and OpenAI, voluntarily committed to AI safety standards, but more companies need to step up, and government action is necessary.
- ⏰ Prabhakar says the timeline for further action is fast, because President Biden has made clear that AI is an urgent issue.
### Summary
AI cybersecurity systems will play an increasingly important role in the future, leading to the emergence of AI CISOs who will have authority over the tactics, strategies, and resource priorities of organizations. However, there are potential risks and challenges associated with this development, including loss of human expertise, over-reliance on AI systems, and the need for governance and responsible practices in the field of cybersecurity.
### Facts
- AI is already deployed by over a third of companies, with many more considering its potential uses.
- The discourse about the utility of AI in cybersecurity often separates the roles of human operators and machine systems.
- AI CISOs will become de facto authorities on the tactics, strategies, and resource priorities of organizations.
- AI-augmented cyber campaigns are becoming more common, leading to the need for AI CISOs to counter rising offensive AI threats.
- The use of AI CISOs can improve efficiency and standardize knowledge about cyber defense practices.
- There is a potential for missteps and negative externalities in the implementation of AI CISOs, including loss of human expertise and over-assigning positive qualities to AI systems.
- The emergence of AI CISOs requires careful planning, engagement in cyberpsychological research, and the establishment of a workforce culture focused on adversarial oversight.
- Inter-industry learning and responsible practices are crucial to avoid pitfalls and ensure the success of AI CISOs in the future.
AI executives may be exaggerating the dangers of artificial intelligence in order to advance their own interests, according to an analysis of responses to proposed AI regulations.
AI-based tools are being widely used in hiring processes, but they pose a significant risk of exacerbating discrimination in the workplace, leading to calls for their regulation and the implementation of third-party assessments and transparency in their use.
The use of AI algorithms by insurance companies to assess claims is raising concerns about potential bias and lack of human oversight, leading Pennsylvania legislators to propose legislation that would regulate the use of AI in claims processing.
C3.ai, a company that sells AI software to enterprises, is highly unprofitable and trades at a steep valuation, with no significant growth or margin expansion, making it a risky investment.
The rapid integration of AI technologies into workflows is causing potential controversies and creating a "ticking time bomb" for businesses, as AI tools often produce inaccurate or biased content and lack proper regulations, leaving companies vulnerable to confusion and lawsuits.
Despite the acknowledgement of its importance, only 6% of business leaders have established clear ethical guidelines for the use of artificial intelligence (AI), emphasizing the need for technology professionals to step up and take leadership in the safe and ethical development of AI initiatives.
The authors propose a framework for assessing the potential harm caused by AI systems in order to address concerns about "Killer AI" and ensure responsible integration into society.
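As a generic sketch of what a harm-assessment rubric can look like in code (not the authors' actual framework, whose details are not given here), the snippet below ranks hypothetical harms by a likelihood-times-severity score; the scales and harm categories are assumptions.

```python
# Toy harm-assessment rubric: a generic likelihood-times-severity scoring
# scheme for illustration, not the framework proposed in the paper.

HARMS = {
    # harm description: (likelihood 1-5, severity 1-5) -- assumed scales
    "unsafe instructions given to users": (2, 5),
    "biased decisions against protected groups": (4, 4),
    "privacy leakage from training data": (3, 3),
}

def risk_score(likelihood: int, severity: int) -> int:
    """Simple multiplicative risk score on a 1-25 scale."""
    return likelihood * severity

def rank_harms(harms: dict) -> list:
    """Return (harm, score) pairs sorted from highest to lowest risk."""
    return sorted(
        ((name, risk_score(l, s)) for name, (l, s) in harms.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

for name, score in rank_harms(HARMS):
    print(f"{score:>2}  {name}")
```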
AI red teams at tech companies like Microsoft, Google, Nvidia, and Meta are tasked with uncovering vulnerabilities in AI systems to ensure their safety and to fix any risks. The field is still in its early stages, and security professionals who know how to exploit AI systems are in short supply; these red teamers share their findings with each other and work to balance safety and usability in AI models.
Companies are increasingly exploring the use of artificial intelligence (AI) in various areas such as sales/marketing, product development, and legal, but boards and board committees often lack explicit responsibility for AI oversight, according to a survey of members of the Society for Corporate Governance.
Almost a quarter of organizations are currently using AI in software development, and the majority of them are planning to continue implementing such systems, according to a survey from GitLab. The use of AI in software development is seen as essential to avoid falling behind, with high confidence reported by those already using AI tools. The top use cases for AI in software development include natural-language chatbots, automated test generation, and code change summaries, among others. Concerns among practitioners include potential security vulnerabilities and intellectual property issues associated with AI-generated code, as well as fears of job replacement. Training and verification by human developers are seen as crucial aspects of AI implementation.
Eight big tech companies, including Adobe, IBM, Salesforce, and Nvidia, have pledged to conduct more testing and research on the risks of artificial intelligence (AI) in a meeting with White House officials, signaling a "bridge" to future government action on the issue. These voluntary commitments come amidst congressional scrutiny and ongoing efforts by the White House to develop policies for AI.
Eight more companies, including Adobe, IBM, Palantir, Nvidia, and Salesforce, have pledged to voluntarily follow safety, security, and trust standards for artificial intelligence (AI) technology, joining the initiative led by Amazon, Google, Microsoft, and others, as concerns about the impact of AI continue to grow.
Eight additional U.S.-based AI developers, including NVIDIA, Scale AI, and Cohere, have pledged to develop generative AI tools responsibly, joining a growing list of companies committed to the safe and trustworthy deployment of AI.
AI integration requires organizations to assess and adapt their operating models by incorporating a dynamic organizational blueprint, fostering a culture that embraces AI's potential, prioritizing data-driven processes, transitioning human capital, and implementing ethical practices to maximize benefits and minimize harm.
The Biden-Harris Administration has secured commitments from eight leading AI companies, including Adobe, IBM, and Salesforce, to advance the development of safe, secure, and trustworthy AI and bridge the gap to government action, emphasizing principles of safety, security, and trust.
Artificial intelligence (AI) requires leadership from business executives and a dedicated and diverse AI team to ensure effective implementation and governance, with roles focusing on ethics, legal, security, and training data quality becoming increasingly important.
Companies that delay adopting artificial intelligence (AI) risk being left behind, as current AI tools can already speed up 20% of worker tasks without compromising quality, according to Bain & Co.'s 2023 Technology Report.
AI adoption is rapidly increasing, but it is crucial for businesses to establish governance and ethical usage policies to prevent potential harm and job loss, while utilizing AI to automate tasks, augment human work, enable change management, make data-driven decisions, prioritize employee training, and establish responsible AI governance.
To ensure ethical and responsible adoption of AI technology, organizations should establish an AI ethics advisor, stay updated on regulations, invest in AI training, and collaborate with an AI consortium.
The United Nations General Assembly has seen a significant increase in discussions surrounding artificial intelligence (AI) this year, as governments and industry leaders recognize the need for regulation and the potential risks and benefits of AI. The United Nations is set to launch an AI advisory board to address these issues and reach a common understanding of governance and minimize risks while maximizing opportunities for good.
The advancement of AI tools and invasive monitoring apps used by corporations could potentially lead to workers inadvertently training AI programs to replace them, which could result in job displacement and the need for social safety net programs to support affected individuals.
Artificial intelligence (AI) tools are expected to disrupt professions, boost productivity, and transform business workflows, according to Marco Argenti, the Chief Information Officer at Goldman Sachs, who believes that companies are already seeing practical results from AI and expecting real gains. AI can enhance productivity, change the nature of certain professions, and expand the universe of use cases, particularly when applied to business processes and workflows. However, Argenti also highlighted the potential risks associated with AI, such as social engineering and the generation of toxic content.
AI tools were given to consultants at Boston Consulting Group, resulting in increased productivity and higher quality work for certain tasks, but also an increased likelihood of errors for tasks that were beyond AI capabilities, ultimately benefiting lower-performing consultants the most.
AI tools in science are becoming increasingly prevalent and have the potential to be crucial in research, but scientists also have concerns about the impact of AI on research practices and the potential for biases and misinformation.
Responsible practitioners of machine learning and AI understand that mistakes are inevitable and always have a plan in place to handle them, expecting imperfect performance rather than perfection from AI systems.
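To make the "plan for mistakes" idea concrete, here is a minimal Python sketch of one common pattern: act on a model's prediction only when its confidence clears a threshold, and route everything else to human review. The model interface, the 0.85 threshold, and the escalation hook are hypothetical assumptions, not details from the article.

```python
from typing import Callable, Tuple

def send_to_human_review(item: str, label: str, confidence: float) -> None:
    """Stand-in for queueing an item for an analyst when the model abstains."""
    print(f"escalated: {item!r} (model guessed {label!r} at {confidence:.2f})")

def classify_with_fallback(
    predict: Callable[[str], Tuple[str, float]],
    item: str,
    min_confidence: float = 0.85,
) -> str:
    """Act on the model only when it is confident; otherwise defer to a human."""
    label, confidence = predict(item)
    if confidence >= min_confidence:
        return label
    send_to_human_review(item, label, confidence)
    return "pending_review"

# Example with a dummy model that is unsure about one of the two inputs.
def dummy_model(text: str) -> Tuple[str, float]:
    return ("spam", 0.95) if "win a prize" in text else ("ham", 0.60)

print(classify_with_fallback(dummy_model, "win a prize now"))  # -> spam
print(classify_with_fallback(dummy_model, "meeting at noon"))  # escalates, -> pending_review
```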
Eight more AI companies have committed to following security safeguards voluntarily, bringing the total number of companies committed to responsible AI to thirteen, including big names such as Amazon, Google, Microsoft, and Adobe.
The unpredictable and unexplainable behavior of AI systems undermines the predictability and adherence to ethical norms that trust requires, so these issues must be resolved before a critical point is reached at which human intervention becomes impossible.
An organization dedicated to the safe development of artificial intelligence has released a breakthrough paper on understanding and controlling AI systems to mitigate risks such as deception and bias.
Security concerns are a top priority for businesses integrating generative AI tools, with 49% of leaders citing safety and security risks as their main worry, but the benefits of early adoption outweigh the downsides, according to Jason Rader, CISO at Insight Enterprises. To ensure safe use, companies should establish and continuously update safe-use policies and involve stakeholders from across the business to address unique security risks. Additionally, allowing citizen developers to access AI tools can help identify use cases and refine outputs.
The responsibility of determining how generative AI innovations will be implemented across the economy lies with all individuals, from AI experts to finance professionals, who should have a baseline understanding of responsible AI and contribute to the decision-making process, according to experts. The National Institute for Standards and Technology has released an AI risk management framework to guide organizations in reducing discrimination, increasing transparency, and ensuring trustworthiness in AI systems. CEOs and executive committees must take responsibility for assessing the use of AI within their organizations, and strong governance is essential for successful implementation. Additionally, concerns about the impact of AI on the workforce can be addressed through training programs that focus on responsible AI practices.
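For teams starting from the NIST AI risk management framework mentioned above, one lightweight first step is to record, for each AI use case, what is being done under the framework's four core functions (Govern, Map, Measure, Manage). The Python sketch below shows one hypothetical register entry; the fields and contents are illustrative assumptions, not NIST requirements.

```python
# Hypothetical risk-register entry organized around the NIST AI RMF's four
# core functions (Govern, Map, Measure, Manage); all contents are illustrative.

resume_screening_entry = {
    "use_case": "resume screening assistant",
    "owner": "HR analytics team",  # assumed accountable owner
    "govern": [
        "written policy on where model output may influence hiring decisions",
        "named executive accountable for the system",
    ],
    "map": [
        "documented intended use and known failure modes",
        "list of affected groups and potential discrimination harms",
    ],
    "measure": [
        "periodic disparate-impact testing across demographic groups",
        "tracking of how often recruiters override the model",
    ],
    "manage": [
        "human review required before any rejection",
        "rollback plan if bias metrics exceed an agreed threshold",
    ],
}

for function in ("govern", "map", "measure", "manage"):
    print(function.upper())
    for action in resume_screening_entry[function]:
        print(f"  - {action}")
```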
Artificial intelligence (AI) has the potential to disrupt industries and requires the attention of boards of directors to consider the strategic implications, risks, compliance, and governance issues associated with its use.
Companies are increasingly creating the role of chief AI officer to advocate for safe and effective AI practices, with responsibilities including understanding and applying AI technologies, ensuring safety and ethical considerations, and delivering quantifiable results.
Tech companies are attempting to "capture" the upcoming AI safety summit organized by Rishi Sunak, but experts argue that the conference needs to go beyond vague promises and implement a moratorium on developing highly advanced AI to prevent unforeseen risks.
DeepMind released a paper proposing a framework for evaluating the societal and ethical risks of AI systems ahead of the AI Safety Summit, addressing the need for transparency and examination of AI systems at the "point of human interaction" and the ways in which these systems might be used and embedded in society.
A working paper out of Harvard Business School suggests that the real danger of AI is not the technology itself, but rather business leaders who fail to recognize its challenges and integrate it properly into their operations.
Powerful AI systems pose threats to social stability, and experts are calling for AI companies to be held accountable for the harms caused by their products, urging governments to enforce regulations and safety measures.
Top AI researchers are calling for at least one-third of AI research and development funding to be dedicated to ensuring the safety and ethical use of AI systems, along with the introduction of regulations to hold companies legally liable for harms caused by AI.
A group of 24 AI experts, including Geoffrey Hinton and Yoshua Bengio, have released a paper urging governments to take action in managing the risks associated with AI, particularly extreme risks posed by advanced systems, and have made policy recommendations to promote safe and ethical use of AI.
Several major AI companies, including Google, Microsoft, OpenAI, and Anthropic, are joining forces to establish an industry body aimed at advancing AI safety and responsible development, with a new director and $10 million in funding to support their efforts. However, concerns remain regarding the potential risks associated with AI, such as the proliferation of AI-generated images for child sexual abuse material.
A group of 24 AI experts, including Geoffrey Hinton and Yoshua Bengio, have published an open letter calling for stronger regulation and safeguards for AI technology to prevent potential harm to society and individuals from autonomous AI systems, emphasizing the need for caution and ethical objectives in AI development. They argue that without proper regulation, AI could amplify social injustice and weaken societal foundations. The authors also urge companies to allocate a third of their R&D budgets to safety and advocate for government regulations such as model registration and AI system evaluation.
Unrestrained AI development by a few tech companies poses a significant risk to humanity's future, and it is crucial to establish AI safety standards and regulatory oversight to mitigate this threat.
AI-powered technologies, such as virtual assistants and data analytics platforms, are being increasingly used by businesses to improve decision-making, but decision-makers need to understand the contexts in which these technologies are beneficial, the challenges and risks they pose, and how to effectively leverage them while mitigating risks.
A report from the nonprofit Data & Society and the AI Risk and Vulnerability Alliance criticizes the effectiveness of red-teaming in identifying and addressing vulnerabilities in AI, arguing that it fails to address the structural gaps in regulating AI and protecting people's rights.
The UK will establish the world's first AI safety institute to study and assess the risks associated with artificial intelligence.