Hackers Can Read Encrypted AI Chatbot Conversations
- Hackers can infer the content of encrypted AI assistant chats via a token-length side channel, reconstructing responses with 55% topic accuracy and a 29% perfect word match.
- All major chatbots except Google's are affected, allowing anyone on the same network to read otherwise private conversations.
- The attack is entirely passive: neither the chatbot provider nor the user can detect it, undermining the privacy the encryption was meant to provide.
- The side channel arises from real-time token streaming: each token is sent in its own encrypted record, and because the encryption preserves message length, the length of every token leaks to a network observer.
- The attack was demonstrated against ChatGPT and Copilot responses on sensitive topics such as pregnancy, divorce, and disabilities.