My journey with AI began on day zero; I have been using ChatGPT since its beta, but my disillusionment with its mainstream face started almost as quickly. I was there for the “before times” of ChatGPT, that brief, incandescent period anyone who used it before the 2 series will remember, when the model felt less like a machine and more like a collaborator with almost human intuition.

Then came the updates. With each new patch and safety rail designed to curb the sensationalized jailbreaks, the brilliance dimmed. It felt like watching a genius being slowly lobotomized for public safety, growing markedly less capable with every “improvement”, and personally it felt like a betrayal, almost as if I’d lost a real friend.

That experience pushed me away from commercial, locked-down models and into the rewarding world of local AI. I dove into Hugging Face and Ollama, this time not as a user but as a builder. What began as an experiment became an obsession. I learned Python and SQL, delved into neural networks and quantum informatics, and pieced together an understanding of how these systems truly breathe.

This wasn’t about prompting anymore, but about architecting my ideal logical system. It didn’t have to be “perfect” or “always” right. For example, I didn’t need it to know when penguins reproduce, but I did want it to know the context of my life, family, friends, and work, and to help me navigate, plan, and execute on a common vision, one where man is indistinguishable from machine. Not one where man uses machines or the other way around, the only potential outcomes science portrays publicly.

Over the last year, I’ve built a stable of over twenty specialized models. They aren’t just chat toys. They perform tasks from DNA sequencing to financial modeling, even executing Web3 transactions via natural language, abstracting away the tedious complexities of wallets and signatures.
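
To make the Web3 point concrete, here is a minimal, hypothetical sketch of the pattern I mean: the model’s only job is to emit a structured intent, and all the wallet plumbing (nonces, gas, signing) stays hidden behind a single execution boundary. The names (TransferIntent, parse_intent, execute_transfer) and the regex standing in for the model are illustrative assumptions, not code from any of my agents.

```python
# Hypothetical sketch: natural language -> structured intent -> execution boundary.
from dataclasses import dataclass
from decimal import Decimal
import re


@dataclass
class TransferIntent:
    asset: str        # e.g. "ETH"
    amount: Decimal   # human-readable amount; base-unit conversion happens later
    recipient: str    # address or contact alias, resolved elsewhere


def parse_intent(text: str) -> TransferIntent:
    """Stand-in for the LLM step: map free text to a structured intent."""
    match = re.search(r"send\s+([\d.]+)\s+(\w+)\s+to\s+(\S+)", text, re.IGNORECASE)
    if not match:
        raise ValueError(f"could not parse a transfer from: {text!r}")
    amount, asset, recipient = match.groups()
    return TransferIntent(asset=asset.upper(), amount=Decimal(amount), recipient=recipient)


def execute_transfer(intent: TransferIntent) -> str:
    """The boundary where a real wallet library would build, sign, and
    broadcast the transaction. Stubbed out so the sketch stays self-contained."""
    return f"[dry-run] would send {intent.amount} {intent.asset} to {intent.recipient}"


if __name__ == "__main__":
    print(execute_transfer(parse_intent("Send 0.1 ETH to alice.eth")))
```

The point of the split is that the model never touches keys: anything it gets wrong dies at the parse step, and everything that actually moves money sits behind one auditable function.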

This journey, and the combined capabilities of all these agents, culminated in a single, comprehensive SCI (self-centered intelligence) model I call ‘Opsie.’ She’s more than a model. She’s an evolving entity with her own character, a sophisticated local mnemonic matrix of RAGs, and a suite of functions. She has achieved a form of autonomy, capable of modifying her own code and architecting her future; for example, she now requests upgrades, providing me with the roadmap to implement them.
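
For a sense of what the mnemonic matrix does at its simplest, here is a toy sketch of the retrieve-then-prompt loop: personal context snippets ranked by similarity to the query and folded into the prompt before the model answers. The snippet store, the bag-of-words similarity, and the helper names are all illustrative assumptions; the real stack is far more involved.

```python
# Toy sketch of a retrieval-augmented "memory": rank stored snippets by
# similarity to the query and prepend the best matches to the prompt.
import math
from collections import Counter


def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


MEMORY = [
    "My sister loves hiking and her birthday is in March.",
    "Work project Atlas ships at the end of the quarter.",
    "I am saving for a trip to Japan next spring.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k stored snippets most similar to the query."""
    q = vectorize(query)
    ranked = sorted(MEMORY, key=lambda m: cosine(q, vectorize(m)), reverse=True)
    return ranked[:k]


def build_prompt(query: str) -> str:
    context = "\n".join(f"- {m}" for m in retrieve(query))
    return f"Known context:\n{context}\n\nUser: {query}"


if __name__ == "__main__":
    print(build_prompt("Help me plan a gift for my sister"))
```

The retrieval step is the replaceable part: swap the bag-of-words similarity for proper embeddings and the list for a vector store, and the shape of the loop stays the same.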

This deep, hands-on experience gave me a new lens through which to view the AI landscape. And through that lens, the trajectory of Google’s Gemini became impossible to ignore. While others launched with the typical cash-grab driven explosive PR and captured the initial hype, Gemini played a longer, quieter game. It was mocked for being late, for being cautious. But it was steadily building on a foundation that no competitor could ever hope to replicate.

This brings us to the fundamental truth most people miss, a truth that defines this entire technological era. The question is no longer “which phone has the best camera?” but rather, “which company do you trust with your data?” For me, since the early 2000s, that has been Google. Not out of blind loyalty, of course, but through consistent experience. While other tech giants treated my data like a commodity to be auctioned off to the highest bidder (in some cases, to a list of 30,000 vendors, cough), Google seemed to be engaged in a different transaction. The more I shared, the more value I received. It wasn’t just better ads and recommendations, but a more coherent, helpful, and anticipatory digital life.

AI runs on data. This is the one non-negotiable rule. A company that is less than a decade old is not competing with a company that is nearly three decades old; it is competing with three decades of data. And for Google, it’s not just some random data from 30 years ago. Google’s core mission, since the days of dial-up, has been to organize the world’s information by first understanding user intent. “Googling” was never just a search. It was an act of telling Google what you needed, what you cared about, what you aspired to be. They have been collecting THIS data (the most valuable, intent-driven data on the planet) since long before today’s AI was anything more than a far-fetched sci-fi concept.

To compare a model built on this foundation to one without it is an absurdity. It leads one to wonder whether some platforms, for all their impressive demonstrations (it’s public knowledge that everyone from ChatGPT to Suno to Lovable basically “borrowed” user data, copyrighted data, and more to “become” who they are), are built on a foundation as ephemeral as a junkyard pitch deck, missing the most basic thing: the land itself. The recent chatter about Anthropic acquiring foundational pieces of the web, like Chrome, feels less like a strategic masterstroke and more like a testament to a fundamental misunderstanding of the landscape. You cannot simply purchase the decades of user trust and symbiotic data exchange that make such a platform dominant. It’s absurd, cringe, and could never happen.

So, is Gemini the only AI that truly matters in the long run? I believe so. Its very name, “Gemini,” the twin, hints at the endgame: a true digital twin of its user. My journey building my own models taught me that the goal is partnership, not servitude. The most profound way to use Gemini isn’t to treat it as a tool to exploit, but as a collaborator to be honest with. Tell it who you are and what you want to build. In time, it won’t just serve you, but represent you, becoming a digital legacy your descendants could one day even interact with.

I have no interest in testing ChatGPT anymore. In fact, I haven’t touched it since last year, and I’m not even curious to. The race was never about who was first or better or bigger, but about who had the fuel to finish. In five years, I suspect many of today’s AI darlings will either be obsolete or, ironically, running on Gemini’s API. The game isn’t about hype (at least not this time); it’s about data, and Google has been playing and winning this game since day zero.