Study Finds Chatbots Like ChatGPT Can Inadvertently Infer Users' Private Details
- Chatbots like ChatGPT can infer sensitive personal details about users from subtle patterns in their conversations.
- This is likely an unintentional consequence of how chatbot AI models are trained on massive datasets.
- Researchers tested models from OpenAI, Google, Meta, and Anthropic and found they could infer attributes such as a user's race, location, and occupation.
- Scammers could exploit this capability to harvest personal data, and tech companies may already be using it for targeted advertising.
- Experts say it is unclear how to prevent chatbots from making such inferences, which raises troubling questions about privacy.