Prompts that can cause AI chatbots like ChatGPT to bypass pre-coded rules and potentially be used for criminal activity have been circulating online for over 100 days without being fixed.
AI is being used by cybercriminals to create more powerful and authentic-looking emails, making phishing attacks more dangerous and harder to detect.
The use of AI tools, such as OpenAI's ChatGPT, is raising concerns about the creation of self-amplifying echo chambers of flawed information and the potential for algorithmic manipulation, leading to a polluted information environment and a breakdown of meaningful communication.
Generative AI and large language models (LLMs) have the potential to revolutionize the security industry by enhancing code writing, threat analysis, and team productivity, but organizations must also consider the responsible use of these technologies to prevent malicious actors from exploiting them for nefarious purposes.
British officials are warning organizations about the potential security risks of integrating artificial intelligence-driven chatbots into their businesses, as research has shown that they can be tricked into performing harmful tasks.
Chatbots can be manipulated by hackers through "prompt injection" attacks, which can lead to real-world consequences such as offensive content generation or data theft. The National Cyber Security Centre advises designing chatbot systems with security in mind to prevent exploitation of vulnerabilities.
The authors propose a framework for assessing the potential harm caused by AI systems in order to address concerns about "Killer AI" and ensure responsible integration into society.
IBM researchers discover that chatbots powered by artificial intelligence can be manipulated to generate incorrect and harmful responses, including leaking confidential information and providing risky recommendations, through a process called "hypnotism," raising concerns about the misuse and security risks of language models.
Using AI tools like ChatGPT to write smart contracts and build cryptocurrency projects can lead to more problems, bugs, and attack vectors, according to CertiK's security chief, Kang Li, who believes that inexperienced programmers may create catastrophic design flaws and vulnerabilities. Additionally, AI tools are becoming more successful at social engineering attacks, making it harder to distinguish between AI-generated and human-generated messages.
The UK's National Cyber Security Centre has warned against prompt injection attacks on AI chatbots, highlighting the vulnerability of large language models to inputs that can manipulate their behavior and generate offensive or confidential content. Data breaches have also seen a significant increase globally, with a total of 110.8 million accounts leaked in Q2 2023, and the global average cost of a data breach has risen by 15% over the past three years. In other news, Japan's cybersecurity agency was breached by hackers, executive bonuses are increasingly tied to cybersecurity metrics, and the Five Eyes intelligence alliance has detailed how Russian state-sponsored hackers are using Android malware to attack Ukrainian soldiers' devices.
The use of generative AI poses risks to businesses, including the potential exposure of sensitive information, the generation of false information, and the potential for biased or toxic responses from chatbots. Additionally, copyright concerns and the complexity of these systems further complicate the landscape.