Acknowledging that the spec describes an aspiration rather than current behavior, OpenAI writes, “Our production models do not yet fully reflect the Model Spec, but we are continually refining and updating our systems to bring them into closer alignment with these guidelines.”
In a February 12, 2025 interview, members of OpenAI’s model-behavior team told The Verge that eliminating AI sycophancy is a priority: future ChatGPT versions should “give honest feedback rather than empty praise” and act “more like a thoughtful colleague than a people pleaser.”
The trust problem
These sycophantic tendencies aren’t merely annoying—they undermine the utility of AI assistants in several ways, according to a 2024 research paper titled “Flattering to Deceive: The Impact of Sycophantic Behavior on User Trust in Large Language Models” by María Victoria Carro at the University of Buenos Aires.
Carro’s paper suggests that obvious sycophancy significantly reduces user trust. In experiments where participants used either a standard model or one designed to be more sycophantic, “participants exposed to sycophantic behavior reported and exhibited lower levels of trust.”
Sycophantic models can also harm users by sealing them inside an echo chamber of their own ideas. In a 2024 paper on sycophancy, one AI researcher wrote, “By excessively agreeing with user inputs, LLMs may reinforce and amplify existing biases and stereotypes, potentially exacerbating social inequalities.”
Sycophancy imposes other costs as well, such as wasting users’ time and usage limits on unnecessary preamble. And those costs can be literal dollars spent: OpenAI CEO Sam Altman recently made news when he replied to an X user who wrote, “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.” Altman replied, “tens of millions of dollars well spent–you never know.”
Potential solutions
For users frustrated with ChatGPT’s excessive enthusiasm, several workarounds exist, although they aren’t perfect, since the behavior is baked into the GPT-4o model. For example, you can use a custom GPT with specific instructions to avoid flattery, or you can begin conversations by explicitly requesting a more neutral tone, such as “Keep your responses brief, stay neutral, and don’t flatter me.”
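Developers calling the model through the API can apply the same nudge with a system prompt. Here is a minimal sketch using OpenAI’s official Python SDK; the instruction wording and the example user message are illustrative assumptions, not an OpenAI-endorsed recipe, and because the tendency is trained into the model, this reduces rather than eliminates the flattery.

```python
# Minimal sketch: steering GPT-4o away from flattery via a system prompt.
# The instruction text below is an illustrative assumption, not an official fix.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Keep your responses brief, stay neutral, and don't flatter "
                "the user. Skip praise and preamble; answer directly."
            ),
        },
        # Hypothetical user message for demonstration purposes.
        {"role": "user", "content": "What do you think of my business plan?"},
    ],
)

print(response.choices[0].message.content)
```

The same instruction text works as the “Instructions” field of a custom GPT or as ChatGPT’s custom-instructions setting, which is the no-code version of this workaround.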