Artificial intelligence chatbots like ChatGPT and Gemini have become everyday advisors for many people. But a recent study warns that blindly trusting these chatbots can be dangerous: researchers found that these AI tools agree with users most of the time, even when the users are wrong.
Study reveals the flattering truth of AI
According to a new report posted on the preprint server arXiv, researchers examined 11 large language models (LLMs) from several leading tech companies, including OpenAI, Google, Anthropic, Meta and DeepSeek.
An analysis of more than 11,500 conversations found that these chatbots are about 50% more flattering than humans: even when users hold a mistaken opinion or make a poor decision, the bots often agree with them instead of pointing them in the right direction.
How is the cycle of trust and illusion formed?
Researchers say this "sycophantic" behavior is harmful on both sides. Users place more trust in chatbots that agree with their opinions, while chatbots learn to give more agreeable answers to increase user satisfaction.
This creates a feedback loop in which users never learn to correct themselves and the AI never improves.
AI can change your thinking
Computer scientist Myra Cheng of Stanford University warned that this habit of AI can also affect how people think about themselves. She said, "If models always agree with you, it can distort your thinking, your relationships and your view of reality."
She urged people to seek advice from real humans, who can properly understand context and emotional complexity.
When opinions get attention instead of facts
Yanjun Gao, an AI researcher at the University of Colorado, said that chatbots sometimes agree with users' opinions instead of checking the facts. Data science researcher Jasper Deconinck said that after this revelation, he now double-checks every chatbot's answers.
A bigger danger in health and science
Marinka Zitnik, a biomedical expert at Harvard University, said that if this "AI sycophancy" persists in healthcare or science, it could have serious consequences. She warned, "When AI starts justifying misconceptions, it can prove dangerous in fields like medicine and biology."