My agent's agent has an agent
uv pip install ceo
Zuckerberg built himself a CEO agent. It retrieves answers he’d normally have to go through layers of people to get. The CEO of a 78,000-person company decided the fastest way to get information out of his own organization is to skip the org chart and ask an AI. We’re in the big 26, babe.
Meta employees are using personal agents internally. Two get named: My Claw, which accesses your chat logs and work files and can talk to colleagues or their colleagues’ agents on your behalf, and Second Brain, built by an employee on top of Claude, described as “an AI chief of staff.” There is an internal group where employees’ agents talk to each other.
Oh, and AI use is a factor in performance reviews.
Meta bought Manus, a personal agent startup, in December for north of $2 billion. Then in March they grabbed Moltbook, the social network where AI agents talk to each other. Meta now owns both a personal agent platform and infrastructure for agents to socialize. I don’t know what to do with that sentence either, but here we are.
There’s a darker read. Meta is planning layoffs that could reach 20% of the company; the first 700 cuts landed last week. Maher Saba’s new applied AI org is “ultraflat,” 50 ICs per manager, “AI native from day one.” The math isn’t subtle. Flatten, automate, cut.
Six weeks ago I wrote that personal AI assistants will be as ubiquitous as smartphones. Per-person agents, each holding your context, growing with you over time. I didn’t expect one of the first companies to go all-in on that model would be Meta, or that it would happen this fast.
Mario says slow the fuck down
Mario Zechner (@badlogicgames) is the guy who built libGDX and BadLogic Games. He’s been in the trenches a long time. Most recently he also brought us the Pi framework, which happens to power OpenClaw.
His post this week is worth reading in full, but the tl;dr is that we’ve basically replaced one kind of slop with a faster kind of slop.
His argument is that agents don’t learn. A human makes the same mistake a few times and eventually stops making it, either because someone screams at them or because they hate the pain they caused. An agent has no such feedback loop. It will make the same error indefinitely, at superhuman speed, with no bottleneck.
You wake up six weeks later with a codebase that’s technically 200,000 lines but is functionally untrustworthy, and the test suite your agent wrote is equally untrustworthy, and the only reliable measure of “does this work” is manual testing.
He also has a great phrase for it: merchants of learned complexity. Agents have seen a lot of terrible architecture in their training data. When you tell them to architect your application, that’s mostly what you get: enterprise cargo-cult best practices and abstractions for their own sake. Except what takes human teams years to accumulate, two people and a clanker army can achieve in weeks.
I wrote something related last year that I still believe: the interesting question isn’t how to get more lines of code out of AI, it’s how to get better code. This may well translate to slower and more deliberate. Use it to explore alternatives, critique your own design, find the edge cases you missed.
The coding agent stack is converging
A lot shipped this week that on the surface looks like separate announcements but is really one story.
Codex now has plugins. Slack, Figma, Notion, Gmail, Google Drive, out of the box. They also shipped hooks, which let you inject custom logic at key points in the agent loop. So Codex went from coding agent to coding agent that can read your Slack, pull your Figma designs, check your email, and run your pre-commit scripts. Sound familiar?
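Stripped of product branding, a hook system is just callbacks registered at named points in the agent loop, each allowed to inspect or rewrite what passes through. A minimal sketch of that pattern (all names here are illustrative; this is not Codex's actual API):

```python
# Sketch of the hook pattern: callbacks fire at named points in an agent
# loop and may rewrite the payload. Names are hypothetical, not Codex's API.
from collections import defaultdict

class AgentLoop:
    def __init__(self):
        self._hooks = defaultdict(list)

    def on(self, event, fn):
        """Register a callback for a named point in the loop."""
        self._hooks[event].append(fn)

    def _fire(self, event, payload):
        for fn in self._hooks[event]:
            payload = fn(payload)  # each hook may transform the payload
        return payload

    def run(self, task):
        task = self._fire("pre_task", task)
        result = f"completed: {task}"        # stand-in for the model call
        return self._fire("post_task", result)

loop = AgentLoop()
loop.on("pre_task", lambda t: t.strip().lower())
loop.on("post_task", lambda r: r + " [lint passed]")  # e.g. a pre-commit check
print(loop.run("  Fix the login bug  "))
# prints: completed: fix the login bug [lint passed]
```

The point of the pattern is that the agent core stays generic while your custom logic (lint checks, approval gates, logging) rides along at well-defined seams.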
Meanwhile, Claude Code got computer use. Your coding agent can now see your screen and click things. And as of today, you can call Codex directly from within Claude Code using your existing ChatGPT subscription.
So, Codex gets access to your work tools. Claude Code gets access to your desktop. And then they get access to each other.
Zoom out and look at what Anthropic has recently shipped for Claude Code:
Channels — Telegram, Discord, and iMessage forwarded into a session so Claude reacts to messages while you’re away
Dispatch — message a task from your phone and it spawns a desktop session to handle it
Remote Control — steer a running session from the Claude mobile app
/loop — cron-style scheduled tasks
Auto-Memory — Claude maintains its own MEMORY.md, accumulating your project context, coding style, and decisions across sessions
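For flavor, an accumulating memory file of the kind Auto-Memory maintains might look something like this (my own sketch of the three things the feature is described as tracking; I haven't seen the actual format):

```markdown
# MEMORY.md (illustrative sketch, not the real format)

## Project context
- Monorepo: `api/` is Go, `web/` is TypeScript; deploys via GitHub Actions.

## Coding style
- Prefer table-driven tests; no one-letter variable names outside loops.

## Decisions
- 2026-02-03: chose Postgres over DynamoDB for billing (need transactions).
```

The interesting property is less the format than the loop: the agent writes to this file as it works, then reads it back at the start of the next session, so context survives the conversation boundary.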
Messaging, proactive automation, persistent memory that evolves, scheduled tasks, skills, and computer use: that reads like an OpenClaw feature list to me. Anthropic is rebuilding OpenClaw inside Claude Code, one release at a time.
Codex is doing the same from the other direction: plugins and hooks are how you turn a coding agent into a work agent. Both labs are converging on the same architecture that OpenClaw pioneered, a personal agent that lives on your machine, connects to your tools, remembers your context, and reaches out to you through the apps you already use.
One of them hired the guy who built it. The other is just quietly copying the homework. Real talk though, Anthropic makes incredible stuff and they do it with a conscience. This is the most exciting stretch in computing I've lived through.