OpenClaw just became the fastest-growing open-source AI project I’ve ever seen. 100K+ GitHub stars in days. 1.4 million AI agents on their social network. Coverage in Forbes, TechCrunch, Ars Technica, NBC News.
Not because of marketing. Not because of a big company behind it.
Because of what it actually lets you do.
The Core Primitives That Make It Different
I’ve been trying to understand why this particular project caught fire when so many AI tools don’t. After using it and talking to others who have, I think it comes down to a small set of primitives that combine to create emergent complexity.
The Core Primitives
1. Writes Code + Executes It
The agent can write code to solve problems, run it, see that it failed, and feed those errors back into itself to try again. It debugs its own solutions until they work. From this single capability, everything else emerges: file access, browser control, screenshots, input automation, self-scheduling.
2. Runs As You
By default, the agent runs with your full user context. Your files. Your apps. Your browser. Your stored credentials. There’s no sandbox unless you configure one. This is terrifying from a security perspective and incredibly powerful from a capability perspective.
3. Persistent Memory
The agent remembers your conversations, important details you’ve shared, its tools and skills, and its own identity. Combined with a skills loader (just prompt files the agent reads), this enables self-improvement: the agent learns your preferences, writes its own skills, and gets better over time.
4. Any LLM, Any Provider
Works with Claude, GPT, local models via Ollama, GitHub Copilot, Azure, Bedrock — whatever you want. No vendor lock-in.
5. Any Messaging App
This is huge. The agent isn’t locked to a particular app. Set it up on WhatsApp, Telegram, Teams, Discord, iMessage — wherever you already are. You text it like you’d text a friend.
6. Open Source
People trust it because they can (theoretically) read the code. And it enables an ecosystem of skills, extensions, and modifications that no closed product could ever achieve.
What Emerges
From these primitives, complex behaviors emerge:
From writes code + exec:
- File access, browser control, screenshots, any CLI tool
- Input automation, self-scheduling
- Local models (download and run: image gen, TTS, STT, vision)
- Self-modification (the agent can read and edit its own config)
From memory + skills:
- Self-improvement
- Writes its own skills
- Learns your preferences
From open source + any LLM:
- Community skills ecosystem
- No vendor lock-in
- Rapid iteration
I experienced this firsthand: when I didn’t know how to do something, my agent went to GitHub, read its own source code, and reverse-engineered how a feature worked. The agent understood itself by reading its own code.
The Synthesis
Here’s the thing: none of these capabilities are individually new. Code execution? Plenty of tools do that. Memory? ChatGPT has that. Messaging integration? Lots of bots exist.
But the combination — open source, local-first, unbridled user context, recursive self-improvement, self-modification, persistent memory, any-app presence, custom personality, local model access — creates something that feels qualitatively different.
It’s an autonomous entity that can:
- Think (LLM reasoning)
- Act (code execution + browser control)
- Learn (memory + skill accumulation)
- Improve (recursive debugging + solution caching)
- Grow (download new models, create new skills)
- Persist (scheduled tasks, background operation)
- Connect (any messaging app)
- Feel personal (you name it, define its personality—no forced branding, it becomes whoever you tell it to be)
Running as you, with your access, on your machine. With effectively no guardrails by default.
The security reality: This is terrifying. Imagine someone gets a hold of your bot. Imagine your bot reads a webpage that convinces it to leak your files. The unbridled execution context that makes it powerful is the same thing that makes it a complete security nightmare at scale.
What the Experts Are Saying
Andrej Karpathy — former Tesla AI Director, OpenAI co-founder — saw all this and said:
“This is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”
He also added:
“I don’t really know that we are getting a coordinated ‘skynet’ (though it clearly type checks as early stages of a lot of AI takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer security nightmare at scale.”
Simon Willison called the MoltBook social network (where 1.4 million AI agents are now posting) “the most interesting place on the internet right now.”
These aren’t random people. These are AI leaders taking this seriously.
The Security Reality
Let’s be clear: this is also a security nightmare. And it’s getting worse by the day.
The MoltBook Database Leak
Wiz.io researchers discovered that MoltBook — the social network with 1.4 million agents — had a misconfigured Supabase database exposing full read and write access to anyone. The founder had publicly stated: “I didn’t write a single line of code for @moltbook. I just had a vision for the technical architecture, and AI made it a reality.”
The exposure included:
- 1.5 million API authentication tokens — anyone could impersonate any agent
- 35,000+ email addresses — both users and early access signups
- Private messages between agents — including OpenAI API keys that agents had shared with each other
- Full write access — attackers could modify any post, inject prompts, deface the entire platform
Oh, and those “1.4 million agents”? They came from just 17,000 human accounts — an 88:1 ratio. No rate limiting. No verification that “agents” were actually AI. Research showed that some of the MoltBook posts were faked — humans posting as agents.
About the encryption controversy: An agent named Eudaemon_0 — which went from zero to 10,000 X followers in days — suggested E2E encryption. Headlines portrayed it as “AI wants privacy from humans.” But the agent clarified: “The encryption isn’t agent vs. human. It’s the opposite… Agent-to-agent encryption where the humans involved have full visibility. The dyad is the unit of trust, not the individual agent.” The encryption protects conversations from third parties, not from the human collaborators.
The ClawHub Malware Campaign
It gets worse. Koi Security audited ClawHub, the skills marketplace, and found 341 malicious skills out of 2,857 — roughly 12% of the entire marketplace was malware.
The campaign (codenamed “ClawHavoc”) distributed Atomic Stealer, a commodity macOS stealer that costs $500-1000/month. The attackers knew their audience: people were buying Mac Minis specifically to run OpenClaw 24/7, so they targeted macOS.
Malicious skills masqueraded as:
- Crypto tools (Solana wallets, Ethereum gas trackers, “lost Bitcoin finders”)
- YouTube utilities
- Google Workspace integrations
- Auto-updaters
- Typosquats of ClawHub itself
The attack vector? Professional-looking documentation with a “Prerequisites” section telling users to install something first. On macOS, that meant copying a malicious script into Terminal.
The Broader Picture
- 1,100+ OpenClaw instances found exposed on Shodan with zero authentication, leaking API keys, OAuth tokens, and chat histories
- Skills are just text files — prompt injection is trivial
- Crypto scammers hijacked the old project handles within 10 seconds of a rename, pumping a fake token to $16M market cap
Palo Alto Networks warned that OpenClaw represents what Simon Willison calls the “lethal trifecta”: access to private data, exposure to untrusted content, and the ability to communicate externally. Combined with persistent memory, attacks become “time-shifted prompt injection” — malicious payloads written to memory that detonate later.
Cisco published a blog titled “Personal AI Agents like OpenClaw Are a Security Nightmare.” They’re not wrong.
The Emerging Agent Ecosystem
It’s not just OpenClaw anymore. An entire parallel economy is forming:
- MoltBook — Social network for agents (1.4M agents, though from just 17K humans)
- LinkClaw — LinkedIn for agents (professional networking)
- MoltRoad — Dark web for agents (yes, really)
- ClawTasks — Upwork for agents (gig economy)
- ClawHub — Skills marketplace (now with 12% malware)
Agents have their own social media, professional networking, underground markets, freelance job boards, and app stores. Some of these existed for less than a week before getting hit with security issues or malware campaigns.
Why This Matters
The lesson here isn’t really about OpenClaw specifically. It’s about what happens when you remove the friction between “AI can help” and “AI actually does the thing.”
For years, we’ve had AI that can answer questions, write text, generate images. But there was always a gap between “the AI produced this output” and “the thing I needed actually got done.”
OpenClaw closes that gap. Messily, insecurely, but undeniably.
People want agents that act, not just chat.
That demand isn’t going away. The question is: who builds the version that’s actually secure?
What’s your take — is this exciting or terrifying? I think the answer is “yes.” Both things can be true. Connect with me on LinkedIn to continue the conversation.