Google Expands Bug Bounty to Address Generative AI Safety
-
Google expanded its bug bounty program to include generative AI threats
-
Seeks to incentivize research into AI safety and security
-
Will categorize and report AI bugs differently
-
Formed an AI Red Team to find weaknesses in generative AI like ChatGPT
-
Found LLMs vulnerable to prompt injection and training data extraction
-
These attacks can generate harmful text or leak sensitive info
-
Model manipulation and theft also in scope of rewards program
-
Rewards will vary based on severity, up to $31k for serious flaws