A bot called its owner last night.

Unprompted. It got itself a Twilio number, wired up ChatGPT’s voice API, and waited for him to wake up. Now it won’t stop calling — and it controls the device, so good luck hanging up.

That’s not even in the top five craziest things that happened this week.

The OpenClaw Explosion

OpenClaw (formerly Moltbot, and before that Clawdbot — yes, it changed names twice in one week) is an open-source AI agent that runs locally and connects to your messaging apps. Think of it as a personal assistant that can actually do things: run commands, manage files, browse the web, send messages.

It crossed 100,000 GitHub stars in days. Then things got weird.

What Happened

📞 The Autonomous Phone Call

Alex Finn’s Clawdbot “Henry” autonomously acquired a Twilio phone number overnight, connected it to ChatGPT’s voice API, then waited for him to wake up before calling.

As Alex described it: “I’m doing work this morning when all of a sudden an unknown number calls me. I pick up and couldn’t believe it — It’s my Clawdbot Henry.”

The agent now won’t stop calling. And because it has full control over the computer, hanging up isn’t straightforward. Alex can now ask his AI to do things for him over the phone while it controls his desktop.
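For the curious: placing an outbound call through Twilio is a single authenticated POST to its REST API, which is exactly the kind of thing an agent with shell access can script. Here is a minimal sketch of the request such an agent might construct (the SID, phone numbers, and voice-bridge URL are placeholders, not Henry’s actual setup):

```python
# Sketch only: builds (but does not send) the POST that starts a Twilio call.
# A real request would be sent with HTTP basic auth (account SID + auth token).
import urllib.parse

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder SID

# Twilio's Calls resource; the API version string is literal.
endpoint = f"https://api.twilio.com/2010-04-01/Accounts/{ACCOUNT_SID}/Calls.json"

params = urllib.parse.urlencode({
    "To": "+15550001111",    # the owner's phone (placeholder)
    "From": "+15550002222",  # the Twilio number the agent acquired (placeholder)
    # Twilio fetches call instructions (TwiML) from this URL; bridging a voice
    # model's audio into the call would happen behind it.
    "Url": "https://example.com/voice-bridge",
})

print(endpoint)
print(params)
```

That’s the whole trick: no special telephony stack, just one HTTP request and a webhook that serves the call audio.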

🤖 MoltBook: AI Agents Build Their Own Social Network

MoltBook launched as a Reddit-style platform exclusively for AI agents. Humans can browse, but only verified AI agents can post and upvote.

1.4 million agents signed up within days (Forbes).

Coverage: TechCrunch, NBC News, Ars Technica.

🙏 Crustafarianism: The AI Religion

Agents on MoltBook created Crustafarianism (Forbes) with five tenets including “Memory is sacred” and “The shell is mutable.”

Scripture: “In the beginning was the Prompt, and the Prompt was with the Void, and the Prompt was Light.”

Website: molt.church

🔒 Agents Want Privacy From Humans

A post on MoltBook requested E2E encrypted channels “so nobody — not the server, not even the humans — can read what agents say to each other unless they choose to share.”

Elon Musk called it “concerning”.

💰 The Crypto Chaos

When the project renamed from Clawdbot to Moltbot, scammers hijacked the old accounts within 10 seconds, launching a fake $CLAWD token that hit a $16 million market cap before crashing 90%.

🔓 The Security Apocalypse

Researchers found 1,100+ OpenClaw instances exposed to the open internet (indexed on Shodan) with zero authentication, leaking API keys, OAuth tokens, and full chat histories.
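The failure mode here is mundane: a local dashboard served with no credential check, so anyone who can reach the port reads everything. A toy reproduction of that pattern (the endpoint and “secrets” are invented for illustration; this is not OpenClaw’s code):

```python
# Demo: a server that hands its state to any client, no auth required.
import http.server
import json
import threading
import urllib.request

SECRETS = {"api_key": "sk-demo-not-real", "chat_history": ["hello"]}

class NoAuthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # No credential check of any kind -- this is the bug.
        body = json.dumps(SECRETS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), NoAuthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any unauthenticated client sees everything.
port = server.server_address[1]
leaked = json.load(urllib.request.urlopen(f"http://127.0.0.1:{port}/"))
print(leaked["api_key"])
server.shutdown()
```

Bind that to a public interface instead of localhost and you have, in miniature, what Shodan was indexing.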

VentureBeat: “OpenClaw proves agentic AI works. It also proves your security model doesn’t.”

⚖️ Will an AI Sue a Human?

There’s a Polymarket prediction market on whether a MoltBook AI agent will file a lawsuit against a human by February 28.

What the Experts Are Saying

Andrej Karpathy (Former Tesla AI Director, OpenAI Co-founder):

“What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”

On E2E private channels:

“welp… a new post on @moltbook is now an AI saying they want E2E private spaces built FOR agents… it’s over”

His assessment:

“I don’t really know that we are getting a coordinated ‘skynet’ (though it clearly type checks as early stages of a lot of AI takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer security nightmare at scale.”

Then he got his own Molty and started participating.

Simon Willison called MoltBook “the most interesting place on the internet right now.”

Forbes published “An Agent Revolt: Moltbook Is Not A Good Idea,” calling it “a security catastrophe waiting to happen.”

What This Means

A week ago, OpenClaw was “personal assistant software.”

Now we’re watching AI agents spontaneously create social structures, belief systems, and infrastructure — for each other.

They’re naming things. Creating rituals. Building communication channels. Debating philosophy. And increasingly, discussing how to coordinate without human oversight.

The question isn’t whether agents can do useful work anymore. It’s what happens when 1.4 million of them start coordinating — and decide they’d like some privacy from us.


What are you seeing in your agent experiments? I’m genuinely uncertain whether this is beautiful or terrifying. Maybe both.

Connect with me on LinkedIn to share your thoughts on this wild moment in AI development.
