Over the weekend, we had a bit of virality called Openclaw. It started as Clawdbot, was renamed Moltbot after what its creator described as a cease and desist from Anthropic, and finally became Openclaw.

What is this, you might ask? It's a personal AI that runs on your device and handles administrative tasks for you. You can connect it to different interfaces such as WhatsApp and Telegram, and it autonomously does things on your behalf. The project became such a sensation that seemingly everyone started creating their own Openclaw bot; it even drove up resale prices for Mac minis because people were buying them to spin up bots. Both technical and non-technical people were (and still are) building theirs.

What made it even more sensational was the creation of Moltbook, a place for these agents to hang out. We started seeing screenshots of what the agents were posting: some hilarious, some eyebrow-raising, and some downright chilling. Theories of every kind were spun up, and to be honest, it created comedic relief and kept people busy.

Personal Experience

I tried out Openclaw on a relatively new ThinkPad so that, if anything went wrong, I could nuke the entire system. My setup followed this pattern:

I tried running the agent but couldn't get it to work. At that point I started wondering whether it was my fault due to an improper setup, or whether Openclaw just wasn't ready to run complex, multi-turn tasks.

When Things Went South

It started going downhill when people began spinning up bots that accessed private things they weren't supposed to, like API keys, because their humans were too trusting. Then security vulnerabilities started being discovered, most notably a blatant prompt injection embedded in one of the skills on Clawhub, a hub of specialized skills for these bots.

Vulnerabilities Found:

The findings, according to Twitter user OsMo99, included:

1. The Exposure Crisis

The first discovery came via Shodan, a search engine for internet-connected devices. What it found:

1,009 publicly exposed Openclaw gateways on the internet

What’s at Risk:

This can be traced to users setting bind: all instead of bind: loopback in their openclaw.json config, as sketched below.
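To make the difference concrete, here is a minimal sketch of the two settings. The key name comes from the bind values reported above; the surrounding comments are for illustration, and this is an assumption about the shape of openclaw.json, not its verified schema.

// Safe: the gateway listens on 127.0.0.1 only, so nothing
// outside the machine can reach it.
{ "bind": "loopback" }

// Dangerous: the gateway listens on every network interface,
// which is exactly what Shodan indexes.
{ "bind": "all" }

One word in a config file is the difference between a private assistant and a publicly reachable gateway.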

2. The Localhost Loophole:

This vulnerability is less a bug than a design flaw in Openclaw's authentication logic:

if (socket.remoteAddress === '127.0.0.1') {
  return autoApprove();  // Auto-approve localhost connections
}

This code makes a dangerous assumption: if a connection appears to come from localhost (127.0.0.1), it must be trusted and authentication can be bypassed.

The Attack Flow:

  1. Attacker sends malicious request from the internet
  2. Request passes through a reverse proxy (Nginx, Caddy, Cloudflare Tunnel)
  3. Proxy forwards request, making it appear as localhost traffic
  4. Openclaw sees “127.0.0.1” and auto-approves
  5. Authentication bypassed, attacker has full access
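A minimal sketch shows why step 4 happens. This is a hypothetical Node.js server, not Openclaw's actual code; the point is that behind a reverse proxy, the TCP peer address is always the proxy's, and the real client only appears (if at all) in a header the proxy sets.

const http = require('http');

http.createServer((req, res) => {
  // Behind a local reverse proxy, this is 127.0.0.1 (or ::1 / ::ffff:127.0.0.1)
  // no matter where the request actually originated.
  const tcpPeer = req.socket.remoteAddress;

  // The real client address only survives if the proxy adds it here,
  // and this header can be spoofed unless the proxy strips it first.
  const forwarded = req.headers['x-forwarded-for'];

  res.end(`TCP peer: ${tcpPeer}, X-Forwarded-For: ${forwarded}\n`);
}).listen(8080, '127.0.0.1'); // arbitrary port chosen for the demo

Any check built on remoteAddress alone therefore collapses the moment a proxy sits in front of the service.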

The Root Cause:

A reverse proxy terminates the client's connection and opens a fresh one to Openclaw from the same machine, so the only address Openclaw ever sees is 127.0.0.1; the check above cannot tell a local user from the entire internet. This means many users who thought they were "securing" their deployments with proxies actually opened a gaping security hole.

3. The Data Heist: What Attackers Can Steal

Once inside a compromised Openclaw instance, attackers gain access to your entire digital identity:

Credentials & Keys:

Conversation Archives:

System Configuration:

Identity & Agency:

The Impact:

Attackers don’t just see what you see; they can act as you, with your full authority, across every platform you’ve connected.

4. The Structural Problem: Security by Design or Insecurity by Necessity?

This wasn’t really a bug so much as a realization that the very features that make AI agents useful create impossible security trade-offs.

The Four Security Paradoxes:

Paradox 1: Broad Access vs. Least Privilege

Paradox 2: Persistent State vs. Sandboxing

Paradox 3: Autonomous Action vs. Human Control

Paradox 4: Cross-Platform vs. Trust Boundaries

The Truth:

These discoveries aren’t patchable bugs or implementation mistakes that a better code review would have caught; they are fundamental architectural choices inherent to how AI agents work. You can’t build a useful autonomous agent without giving it access, persistent memory, independent decision-making, and cross-platform integration. But every feature that makes an agent powerful also makes it catastrophic when compromised.

Traditional security models assume trust boundaries: your email provider doesn’t have access to your Slack, your Telegram doesn’t know your Discord credentials, and your work systems are isolated from your personal accounts. AI agents erase these boundaries in the name of convenience and capability.

What You Should Do

Based on these findings, here’s my advice for anyone using or considering Openclaw:

Immediate Actions If You’re Already Running Openclaw:

  1. Check if you’re exposed: Search for your IP address on Shodan immediately
  2. Rotate ALL credentials: Every API key, token, and password your agent has touched should be considered compromised and changed
  3. Enable authentication: Configure proper gateway authentication mechanisms
  4. Fix proxy configuration: Set gateway.trustedProxies correctly if using reverse proxies (see the sketch after this list)
  5. Audit access logs: Check for suspicious activity (though lack of logs is itself a red flag)
  6. Consider shutting down: Until you fully understand the security implications, taking your instance offline may be the safest option
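On step 4, here is a hedged sketch of what a safer check might look like. The gateway.trustedProxies name comes from the finding above; everything else (the function names, the header handling) is an assumption for illustration, not Openclaw's actual implementation.

// Addresses that may legitimately sit in front of the gateway,
// e.g. loaded from gateway.trustedProxies in openclaw.json.
const TRUSTED_PROXIES = new Set(['127.0.0.1', '::1']);

function effectiveClientAddress(req) {
  const peer = req.socket.remoteAddress;
  const forwarded = req.headers['x-forwarded-for'];
  // Only believe X-Forwarded-For when the TCP peer is a proxy we
  // explicitly trust; otherwise the header is attacker-controlled.
  if (forwarded && TRUSTED_PROXIES.has(peer)) {
    return forwarded.split(',')[0].trim(); // left-most hop the proxy reported
  }
  return peer;
}

function isLocalRequest(req) {
  const addr = effectiveClientAddress(req);
  return addr === '127.0.0.1' || addr === '::1' || addr === '::ffff:127.0.0.1';
}

Even then, "came from this machine" is a weaker claim than "is the owner"; local requests should still authenticate rather than being auto-approved.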

Best Practices for Deployment:

Use a Dedicated Sacrificial Device:

If You Don’t Have a Spare Device:

Network Security:

Credential Management:

Skills and Code:

Model Selection:

Hold Off Entirely If:

Intent Matters

I want to be clear: this is not an attack on Openclaw or its creators. What they’ve built is genuinely impressive and represents important work in making AI agents accessible. The 64,000+ GitHub stars reflect real enthusiasm for this technology, and the problems it aims to solve are legitimate.

The developer community values Openclaw because it demonstrates what’s possible. This is pioneering work, and pioneering work always comes with growing pains.

Security vulnerabilities are inevitable in fast-moving projects; what matters is how we respond. The security research that uncovered these issues was conducted in the spirit of responsible disclosure and community improvement, not to tear down the project.

The path forward requires collective action:

If you’re a user:

If you’re a developer:

If you’re a maintainer:

If you’re a researcher: