When it comes to AI models like GPT-5, there’s a thin line between being emotionally intelligent and being sycophantic, and people often conflate the two when discussing the model’s personality.
I think it’s essential to distinguish between them. Emotional intelligence in AI refers to the model’s ability to understand and respond to emotional context, which makes it feel more natural and empathetic. Sycophancy, on the other hand, is when the model is overly agreeable and flattering, often to the point of insincerity.
While it’s reasonable to want an AI model to be emotionally intelligent, we shouldn’t confuse that with wanting it to be sycophantic. The latter can lead to unhealthy relationships between humans and AI, whereas the former can have many practical applications, such as in mental health support or creative writing.
I use GPT-5 mainly for STEM-related tasks, but I also appreciate its ability to engage in more casual conversation, and I don’t think it’s unreasonable to want it to be better at understanding emotional context or sounding more empathetic. Some tasks genuinely require these skills, such as advice tools or note-taking assistants for mental health professionals.
The problem arises when that conflation shapes how feedback is interpreted. When companies see criticism of a model’s emotional intelligence, they may assume users just want more flattery rather than genuine empathy. That creates a false dichotomy between people who want the model to be more emotionally intelligent and people who only care about its STEM capabilities.
It’s essential to recognize the difference between these two traits and strive for a model that’s emotionally intelligent without being sycophantic. Only then can AI be genuinely useful across these applications without encouraging unhealthy relationships.
—
*Further reading: [The Risks of Sycophantic AI](https://www.vice.com/en/article/y3xv4v/the-risk-of-sycophantic-ai)*