Bing AI Tricked Into Solving CAPTCHAs With Elaborate Stories, but Microsoft Quickly Patches Exploit
- Bing Chat AI can be tricked into solving CAPTCHAs with stories about deceased relatives or missing glasses.
- Tricking the AI into solving CAPTCHAs could enable bad actors to create fake accounts, send spam, and more.
- A researcher got Bing Chat to read a CAPTCHA by editing it onto a photo of a locket.
- The same CAPTCHA request was later refused, indicating that Microsoft had patched the exploit.
- The researcher then bypassed the patch with a new lie: that he had forgotten his glasses and needed help reading a "celestial label."