Researchers Demonstrate Vulnerabilities in AI Systems
- Researchers created an AI worm, Morris II, that can steal data and spread via AI-enabled email
- The worm targets AI apps like ChatGPT that generate text and images
- It uses adversarial prompts to replicate itself and engage in malicious activities
- Demonstrated against email assistants to spam and steal personal data
- OpenAI aims to make systems more resilient to these prompt-injection vulnerabilities