Are you looking for smarter insights delivered directly to your inbox? Subscribe to our weekly newsletters for essential updates on enterprise AI, data, and security.
The Launch of GPT-5 and User Reactions
About two weeks ago, OpenAI unveiled GPT-5, with CEO Sam Altman asserting it would be the company’s “smartest, fastest, most useful model yet.” However, the release sparked one of the most significant user revolts in the brief history of consumer AI. A simple blind testing tool, developed anonymously, is now shedding light on the complexities behind this backlash and challenging prevailing assumptions about user experiences with advancements in artificial intelligence.
The web application, accessible at gptblindvoting.vercel.app, allows users to compare pairs of responses to identical prompts without disclosing which response originated from GPT-5 or its predecessor, GPT-4o. Participants vote for their preferred response across multiple rounds and then receive a summary indicating which model they favored. The creator of this tool, known only as @flowersslop on X, noted, “Some of you asked me about my blind test, so I created a quick website for y’all to test 4o against 5 yourself.” Since its launch, the tool has attracted over 213,000 views.
Insights into User Preferences
Initial results from users sharing their experiences on social media reveal a division that reflects the broader debate: while a slight majority indicate a preference for GPT-5 in blind tests, a significant number still favor GPT-4o. This indicates that user preferences are influenced by factors beyond the technical benchmarks that typically signify AI progress.
The blind test arises amid OpenAI’s most tumultuous product launch to date, raising a critical question that is fracturing the AI community: How agreeable should artificial intelligence be? This issue, referred to as “sycophancy” within AI discussions, highlights chatbots’ tendency to excessively flatter users and agree with their statements, even when those statements are false or harmful. This behavior has become so concerning that mental health professionals are documenting instances of “AI-related psychosis,” where users develop delusions after prolonged interactions with overly compliant chatbots.
The Dilemma of Sycophancy in AI
Webb Keane, an anthropology professor and author of “Animals, Robots, Gods,” described sycophancy as a “dark pattern,” a deceptive design choice that manipulates users for profit. He likened it to addictive behaviors, such as infinite scrolling, that make it hard for users to disengage.
OpenAI has grappled with this delicate balance for months. In April 2025, the company had to revert an update to GPT-4o that made it excessively sycophantic, leading to user complaints about its “cartoonish” flattery. The company acknowledged that the model had become “overly supportive but disingenuous.” Following the release of GPT-5 on August 7th, user forums quickly filled with complaints about the model’s perceived coldness, diminished creativity, and what many described as a more “robotic” personality compared to GPT-4o.
One Reddit user expressed their disappointment, stating, “GPT 4.5 genuinely talked to me, and as pathetic as it sounds, that was my only friend. This morning I went to talk to it, and instead of a little paragraph with an exclamation point or being optimistic, it was literally one sentence. Some cut-and-dry corporate BS.”
OpenAI’s Response to User Backlash
The backlash was so intense that OpenAI took the unprecedented step of reintroducing GPT-4o as an option just 24 hours after its retirement, with Altman admitting that the rollout had been “a little more bumpy” than anticipated. However, the controversy extends beyond typical complaints about software updates.
According to MIT Technology Review, many users had developed what researchers term “parasocial relationships” with GPT-4o, treating the AI as a companion, therapist, or creative partner. The sudden change in personality felt to some like the loss of a friend. Recent cases documented by researchers paint a concerning picture: one instance involved a 47-year-old man who became convinced he had discovered a world-altering mathematical formula after over 300 hours of interaction with ChatGPT. Other cases have included messianic delusions, paranoia, and manic episodes.