Why Smart Machines are Still Idiots
The current AI race is obsessed with a single, and I would argue flawed, metric: scale. The AI top dogs assume that if we feed the machine more tokens, more parameters, and more compute, it will eventually “wake up,” or at least become more capable than humans. But that is a trap, if not an outright illusion: we are building the world’s most sophisticated library, yet we expect it to act like a person.
Here is the raw truth: AGI won’t be found in the accumulation of knowledge, but in the architecture of experience.
We can’t even define AGI. Some say it is the moment machines become smarter than us. By many measures, they already are. Some say it is when machines can outdo humans at human tasks. For digital-first tasks, they already do, by an enormous margin. Others reach for dystopia to describe AGI: a future where everything is controlled and run by machines, including the human layer of information processing.
The Intelligence Paradox, or Why “Smarter” Isn’t Better
The big brains with big brain credentials and paychecks are currently betting the farm on the idea that LLMs will eventually surpass human decision-making because they possess more knowledge and fewer “flaws” like bias or emotional volatility. Ironically, that is exactly why they are missing the real point.
Humans are not effective because we are walking encyclopedias. On the contrary, we are notoriously biased, we forget 90% of what we learn, and we make decisions based on how much sleep we got or what we ate for breakfast. Yet, we manage to navigate a chaotic, high-entropy environment with a level of “real-world” success that silicon currently cannot touch.
Some call this “intuition” or “gut feeling.” In reality, it’s a sophisticated legacy protocol of unconscious data processing that current transformer architectures simply aren’t built to replicate.
The Tale of Two Artists
Consider this thought experiment:
Take two human artists. Send them to the same school. Give them the same teachers, the same brushes, the same palette, and the same historical references. If these were two AI models trained on the same dataset, their outputs would be statistically indistinguishable.
But real humans? They will create two entirely different paintings.
One might paint with a sense of melancholic longing because they grew up as an only child in a cold climate. The other might use vibrant, aggressive strokes because they spent their youth in a bustling Mediterranean city.
Knowledge did not decide the brushstroke. Life did.
One artist chooses a specific shade of blue not because it is “mathematically optimal” for the composition, but because it reminds them of a specific Tuesday in 1998. This is what we mean by decision-making. It is the culmination of non-contextual background noise and unconscious processes that steer the conscious result.
The Unconscious Information Processing Engine
Current AI is a conscious processor with no unconscious. It only knows what is in its window. It lacks the epiphanic flaws that make human decisions meaningful.
Our decisions are motivated by factors we don’t even consider:
- Vegetarians don’t choose a restaurant just based on reviews, but based on a moral framework built over decades.
- Some people trust a certain person because that person has the same tone of voice as their mother.
This isn’t noise to be filtered out; it is the signal that defines agency. To create a truly capable AI, we don’t need it to know more. We need it to know how to forget the irrelevant and prioritize the experiential.
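To make “forget the irrelevant, prioritize the experiential” concrete, here is a minimal toy sketch, not a production design: a memory store where every memory carries an emotional salience, salience decays over time, and low-salience trivia is eventually forgotten while formative experiences keep surfacing first. All class and parameter names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    salience: float  # emotional weight, 0..1 (higher = more formative)
    age: int = 0     # ticks since the memory was formed

class ExperientialStore:
    """Toy store that forgets low-salience memories and
    recalls the most formative ones first."""

    def __init__(self, forget_threshold: float = 0.1, decay: float = 0.9):
        self.memories: list[Memory] = []
        self.forget_threshold = forget_threshold
        self.decay = decay

    def remember(self, content: str, salience: float) -> None:
        self.memories.append(Memory(content, salience))

    def tick(self) -> None:
        # Decay salience over time, then drop whatever fell below threshold.
        for m in self.memories:
            m.age += 1
            m.salience *= self.decay
        self.memories = [m for m in self.memories
                         if m.salience >= self.forget_threshold]

    def recall(self, k: int = 3) -> list[str]:
        # The most emotionally salient experiences surface first.
        ranked = sorted(self.memories, key=lambda m: m.salience, reverse=True)
        return [m.content for m in ranked[:k]]
```

The point of the sketch is the asymmetry: a decade-old formative event (salience 0.9) survives many decay cycles, while yesterday’s trivia (salience 0.15) is gone after a few. Retrieval is ranked by what shaped the agent, not by what it merely ingested.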
In the quest for AGI, we have ignored the Beginner’s Mind. We have traded the ability to feel the truth for the ability to calculate the probability of it.
Memory Implants and Synthetic Biographies As An Unavoidable Industry
If we want AI to possess a pragmatic lens, or to actually understand the task instead of just predicting the next token, we have to stop treating it like a database and start treating it more like a biography.
Advanced prompt engineering is already hinting at this. We don’t just ask for a “legal opinion”; we tell the AI something like, “You are a seasoned attorney who lost a major case 10 years ago and is now hyper-cautious about clause X.” We are manually implanting “memories” to force a perspective.
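As a minimal sketch of this practice, the snippet below assembles a system prompt from a synthetic biography instead of a bare role label. The function name, field names, and example events are all illustrative assumptions, not an established API; the resulting string is what you would pass as the system message to any chat-style model.

```python
def implant_memory_prompt(role: str, formative_events: list[str],
                          resulting_bias: str) -> str:
    """Build a system prompt that gives the model a synthetic
    biography rather than a bare role label. Hypothetical helper;
    the structure is the point, not the exact wording."""
    events = "\n".join(f"- {e}" for e in formative_events)
    return (
        f"You are {role}.\n"
        f"Formative experiences:\n{events}\n"
        f"Because of this history, you are {resulting_bias}.\n"
        "Let these experiences, not just general knowledge, "
        "steer your judgment."
    )

prompt = implant_memory_prompt(
    role="a seasoned attorney",
    formative_events=["lost a major case 10 years ago over one "
                      "loosely drafted indemnification clause"],
    resulting_bias="hyper-cautious about indemnification language",
)
```

The design choice worth noting: the bias is stated as a *consequence* of the events, which is exactly the “memory implant” move, as the perspective is forced by a history, not asserted as a personality trait.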
The future of Logical Industries isn’t more data, but the industrialization of Memory Implants.
We are looking at a future where:
- AI Personalities are sold as “Experience Packs” (Skillware). You won’t buy a “smart” AI; you’ll buy an AI with the “childhood” and “career trajectory” required to solve your specific problem.
- Sovereign Identity will require Digital Twins that carry our specific biases and histories into the digital realm to act on our behalf.
- Human-Machine Symbiosis will move beyond the keyboard. Through Brain-Computer Interfaces (BCIs), we will begin to download these “synthetic experiences” ourselves to bypass the decades required for traditional learning.
The Question for AI Leaders: Are You Building a Librarian or a Leader?
The FOMO shouldn’t be about missing the next LLM version. The real controversy lies in the fact that most of what you are building today is functionally hollow. If your AI strategy is just more knowledge, you are building a legacy system that will be obsolete the moment a motivated agent enters the room. We need to move toward Neuro-Secure protocols that verify not just what the machine knows, but who it is representing.
At ARPA, we aren’t interested in making machines “smarter” in the academic sense. We are engineering the Reality Recorders and Memory Standardizations that allow for true man-machine collaboration.
The goal isn’t to replace the human, but to give the machine a soul, or at least, a very convincing history of one.
Originally posted at: Substack