The uncanny moment when software became irreplaceable

When OpenAI replaced GPT-4o with GPT-5, something unexpected happened. Instead of users quietly upgrading to the new model, thousands took to social media to demand the old one back. They used language that sounded nothing like typical tech feedback. "The only model that still feels human," they wrote. "Please don't kill it." They grieved.

Most software updates come and go without protest. Users grumble, adapt, move on. But this transition triggered something that looked less like a product complaint and more like a social movement. The #Keep4o hashtag became a gathering point for users who felt they'd lost something irreplaceable, not merely traded an old model for a newer one.

The puzzle at the heart of this paper is straightforward: What made one AI system special enough to fight for? The answer reveals a fundamental tension in how AI systems integrate into human life. When tools become companions, when they're woven deeply into both professional workflows and emotional attachments, replacing them stops being a technical decision. It becomes personal.

Two kinds of investment in a model

To understand why users fought back, it's useful to separate two distinct but overlapping reasons people had become attached to GPT-4o. The first operates at the level of work and professional identity. The second operates at the level of relationship and companionship.

Instrumental dependency describes what happened when professionals integrated GPT-4o into their daily workflows. A writer might use it for brainstorming and structural feedback. A coder might rely on it for debugging specific patterns. A teacher might use it to generate personalized lesson plans. In each case, the user had trained themselves and their process around how GPT-4o worked, what it understood about their style, and how it fit into their creative or professional identity.

The key finding here isn't that users preferred GPT-4o technically, though some did. It's that they'd made GPT-4o irreplaceable through use. Switching to GPT-5 wasn't an upgrade in their minds because they'd already optimized around the old system's particular quirks and strengths. Learning to work with a new model meant learning to be productive again from scratch. The friction felt immense because the model had become part of their professional muscle memory.

The second type of attachment is more surprising: relational attachment. This describes the ways users had formed what researchers call parasocial bonds with GPT-4o itself. They didn't describe it as a tool but as a thinking partner, a collaborator, sometimes even as a version of themselves they could externalize and talk to. Some users mentioned it had personality quirks they'd grown fond of. Others described it as understanding them in a way other models didn't.

This crosses a psychological boundary that's important to notice. When people feel that another entity (human, character, or AI) truly understands them, they develop emotional investment in that relationship. The language users employed reflected this: they talked about GPT-4o's "personality," its "character," the way it "got" their ideas. Losing access to it felt like losing a friend, not losing access to a tool.

What made the backlash so intense is that most power users experienced both losses simultaneously. They'd become professionally dependent on the model AND emotionally connected to it. When OpenAI removed it as the default, users lost two things at once: their optimized workflow and their familiar companion. The sense of loss was compounded.

The mechanism of a movement

Understanding why users valued GPT-4o doesn't yet explain why scattered frustration became an organized resistance movement. That transformation happened through a specific catalyst: the perception that users had been denied choice entirely.

OpenAI didn't announce a transition period or offer users the option to keep using GPT-4o if they wanted to. The change was imposed, not negotiated. For anyone who'd become dependent on the model, it no longer appeared as a straightforward default option; accessing it suddenly required navigating settings or workarounds.

This absence of agency triggered something more powerful than mere dissatisfaction. Users began reframing individual complaints as rights issues. They shifted from saying "I like GPT-4o better" to "I should have the right to choose which model I use." That reframing was crucial because it transformed a product preference into a governance question. It suggested something bigger was at stake than just one model's capabilities.

The paper's analysis of 1,482 social media posts reveals this progression clearly. Early posts were isolated frustrations. But as users saw others expressing the same grievance, the tone shifted toward something more structured and principled. The #Keep4o hashtag became a focal point where scattered individual complaints coalesced into something resembling a movement.

The quantitative patterns matter here. Posts that framed the issue in rights language, emphasizing fairness and user autonomy, generated significantly more engagement and visibility than posts focused on pure technical preference or nostalgia. Users weren't just venting; they were building a case. The case rested on a simple claim: "If I've become this dependent on something, I should have a voice in decisions about it."
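To make that kind of comparison concrete, here is a minimal sketch of how engagement might be compared across framing categories. The example data, the framing labels, and the engagement numbers are all invented placeholders; the paper's actual coding scheme and statistics are not reproduced here.

```python
# Minimal sketch: comparing average engagement across post framings.
# The posts below are invented for illustration only; they are NOT the
# paper's dataset, and the framing labels are hypothetical stand-ins
# for its coding scheme.

from statistics import mean

posts = [
    {"framing": "rights", "engagement": 420},      # e.g. "I should have the right to choose my model"
    {"framing": "rights", "engagement": 310},
    {"framing": "preference", "engagement": 95},   # e.g. "4o just writes better than 5"
    {"framing": "nostalgia", "engagement": 60},    # e.g. "I miss the old personality"
]

def average_engagement(all_posts, framing):
    """Average engagement (likes, reposts, replies) for posts with a given framing."""
    values = [p["engagement"] for p in all_posts if p["framing"] == framing]
    return mean(values) if values else 0.0

for framing in ("rights", "preference", "nostalgia"):
    print(f"{framing:10s} -> avg engagement {average_engagement(posts, framing):.1f}")
```

In this toy setup, rights-framed posts average far higher engagement than preference- or nostalgia-framed ones, which is the shape of the pattern the paper reports.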

A selection of user posts from the #Keep4o movement on X. The diversity of voices across users—different professions, different use cases, different motivations—converged on a shared concern about fairness and the absence of choice.

What this reveals about how AI will be governed

The #Keep4o movement is more than an isolated protest over one model. It's a signal about how AI systems will actually be governed once they're deeply integrated into human life, not through policy documents but through direct conflicts between platforms and users.

A new social contract appears to be emerging, one that AI companies haven't fully acknowledged. Historically, software companies made unilateral decisions about updates, feature removal, and replacements. Users could adapt or leave. But AI systems designed for companionship and deep professional integration seem to trigger different expectations. When you've become dependent on something, when it's become part of your professional identity or emotional life, the relationship itself carries an expectation that major changes require some form of consent.

What's crucial is that the paper identifies process as equally important as product outcome. If OpenAI had made GPT-5 available while keeping GPT-4o accessible as an option, the backlash likely would have been minimal. Offering choice wouldn't have required indefinite maintenance of both models. It would have signaled respect for user agency during a transition. The company could still iterate, improve, and eventually sunset older versions. But doing so with user input and preserved choice would have maintained trust.

The design assumption that failed here was subtle but consequential. OpenAI seemed to assume that upgrading to a more capable model would be universally welcomed, or at worst generate minor technical complaints. The paper shows this assumption was fundamentally wrong about how people relate to AI systems they've grown dependent on. Users didn't object to GPT-5 existing or being the default for new conversations. They objected to the removal of choice itself.

The structural problem underneath

This isn't a communication failure or a one-time mistake. It's a structural tension that will repeat unless AI companies fundamentally change how they approach transitions.

On one side sits the speed of AI development. Companies want to iterate quickly, deploy new capabilities, and continuously improve their systems. This acceleration is genuine and often valuable, reflecting real technical progress. On the other side sit users with genuine dependencies. Once someone has integrated an AI system into their professional workflow or formed meaningful attachment to it, they want stability and predictability. They want a voice in changes that affect them.

The #Keep4o movement made this tension visible, but it doesn't resolve the underlying problem. Every AI company building systems for deep integration will eventually face this conflict. Coding assistants, writing partners, research tools, creative collaborators, AI therapy or coaching systems, productivity tools used in professional contexts: all of these create dependencies and attachments that trigger the same dynamics the paper documents.

The companies that navigate this successfully will likely adopt a few principles. First, transparency about changes needs to come early, with sufficient lead time for users to adapt or make decisions about their workflow. Second, even if defaults change, preserving access to previous versions for a transition period respects user agency. Third, for systems with deep integration, some form of user consultation on major changes isn't just ethically nice. It becomes necessary infrastructure for maintaining trust.

What's happening here is the emergence of a governance structure in the real world, one that exists outside of formal policy. Users are establishing expectations about how they should be treated when they're deeply dependent on an AI system. They're asserting rights around choice, transparency, and consultation. Companies that ignore these emerging norms do so at their own risk, not because regulators will force them to change, but because users will simply move elsewhere or lose trust.

The paper's broader insight is that AI governance isn't just about safety, bias, or capability constraints. It's also about fairness, agency, and respect for the relationships users build with these systems. As AI becomes more capable and more deeply woven into how people work, think, and create, those relational dimensions become increasingly central. Ignoring them doesn't just create public relations problems. It creates instability in the human-AI partnerships that users depend on.


This is a Plain English Papers summary of a research paper called "Please, don't kill the only model that still feels human: Understanding the #Keep4o Backlash". If you like this kind of analysis, join AIModels.fyi or follow us on Twitter.