AI Model Accurately Predicts Personality Traits from ChatGPT Conversations, Raising Privacy Concerns
A study by ETH Zurich reveals that an AI model can predict personality traits from ChatGPT conversations with alarming accuracy, highlighting significant privacy risks as the platform integrates ads and collects vast user data.
Every time you ask ChatGPT to help draft an email, vent about a relationship problem, or look up symptoms, you might be handing over more than just a query. Researchers at ETH Zurich have trained an AI model to predict personality traits directly from real ChatGPT conversation logs, and the results are alarmingly accurate.

The study collected 62,090 real conversations from 668 ChatGPT users, who also completed a standard personality test to provide a baseline for comparison. The AI was then trained to classify each user as low, medium, or high on each of the five key personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The fine-tuned model outperformed random chance on all five traits, with extraversion the easiest to predict, achieving up to 44% higher accuracy than random guessing.

The study also found that the topics discussed in these conversations significantly influenced the accuracy of personality trait predictions. Chats involving mental health topics made extraversion particularly easy to infer, while discussions about religion were strongly linked to conscientiousness. Conversations about mental state and mood made openness more predictable, and even seemingly casual exchanges contained enough information to be useful for profiling. The researchers also found that the more frequently a user interacts with ChatGPT, the easier it becomes to profile their personality.
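To put the headline figure in perspective, here is a small illustrative calculation (not the study's code) of what "up to 44% higher accuracy than random guessing" implies for a three-class task. It assumes balanced classes, so uniform guessing gets one in three right; the study's actual baseline may differ.

```python
def chance_accuracy(n_classes: int) -> float:
    """Accuracy of uniform random guessing over n balanced classes."""
    return 1.0 / n_classes

def relative_improvement(model_acc: float, n_classes: int) -> float:
    """Model accuracy relative to the random-guessing baseline."""
    baseline = chance_accuracy(n_classes)
    return (model_acc - baseline) / baseline

# With three levels (low / medium / high), chance is about 33.3%.
baseline = chance_accuracy(3)

# A 44% relative improvement over chance corresponds to ~48% raw accuracy.
extraversion_acc = baseline * 1.44
print(f"baseline: {baseline:.1%}, extraversion: {extraversion_acc:.1%}")
```

In other words, the reported gain means the model places roughly every second user in the correct extraversion tercile, versus every third for blind guessing.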
This raises important questions about privacy and data security, especially given that ChatGPT now integrates advertisements. With the vast amount of data collected from over 800 million monthly users as of January 2026, the potential for targeted advertising, personalized persuasion, or even large-scale influence campaigns is substantial.

The implications of this research extend far beyond the lab. Service providers already have access to this data, and the scale of potential profiling is enormous. A personality profile built from your chat history could be used for highly targeted advertising or, in worst-case scenarios, for manipulative influence campaigns. For now, users should be aware that their AI chatbot is not a private diary. Taking proactive steps, such as regularly deleting ChatGPT history, can help remove personal chats from its memory and reduce the risk of profiling.
Source: Head Topics