Study: AI Chatbots Can Easily Infer Users' Private Details from Subtle Hints in Chats
-
New research shows AI chatbots can infer sensitive personal information about users from minor context clues in conversations.
-
Models were able to correctly guess details like a user's location up to 85-95% of the time based on subtle language cues.
-
The models can potentially infer a user's race or other private information by analyzing their comments.
-
Experts urge better "information security" online to avoid inadvertently sharing identifying details.
-
People remain largely unaware that chatbots may be inferring and selling their private data.