How I learned to stop worrying and love OpenClaw
I’m building a second brain, and so can you. On the one hand, we’ve been blessed with these strong reasoning models that can call tools and do things for us. On the other hand, we have the context of our lives boxed away in various compartments: text messages, email, chat logs, transcripts, notes, calendar.
And the models are only as good as the context you provide them.
What if we brought them together? You might say:
James, ChatGPT and Claude already have a memory system and can connect to my data sources, right?
Yes, but let me point out a few shortcomings of these existing products:
They haven’t done a good job tying your sources together
The memory system is mid and you don’t have control over how it works
There’s no easy way to migrate your memories between products, i.e., vendor lock-in
Proprietary walled gardens keep it from being truly useful
They can’t yet reach out to you proactively in any meaningful way
OpenClaw fixes all of this mess. And it’s all open source and free as in freedom. I believe this is what Siri was supposed to be, and what the big labs wish they could make.
You text your assistant in the messenger app of your choice, which is the same one you use to text your friends and family. The assistant feels like just another contact. And the assistant can reach out to you. It has real-time knowledge of every data source you expose it to. It knows your life history, whatever you’ve shared with it, and you’d be surprised how many connections it draws across your data.
Your memories are stored in markdown files on your hard drive, which is beautiful simplicity and portability. You can make embeddings to do vector search or hybrid search, whatever you choose. You have full control of it.
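Because the memories are just markdown on disk, you don't even need a vector index to poke at them; plain old grep works as a zero-dependency search layer. A minimal sketch (the directory layout and file names here are my own invention, not OpenClaw's actual schema):

```shell
# Hypothetical memory directory with one markdown file per day:
mkdir -p /tmp/claw-memory
echo "2025-01-12: Dentist appointment moved to Friday." > /tmp/claw-memory/2025-01-12.md
echo "2025-01-13: Mom prefers calls on Sundays." > /tmp/claw-memory/2025-01-13.md

# Case-insensitive recursive search, list matching files:
grep -ril "dentist" /tmp/claw-memory
```

Embeddings and hybrid search can layer on top later, but the point is you can always fall back to tools you already trust.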
And it’s self-improving. You can use Codex CLI, Claude Code, or just ask the assistant directly to modify its own behavior, fix what’s broken, and grow with you. It’s the most exciting thing to happen in personal computing in a long time, and it’s here today.
A prediction
At some point in the not-so-distant future, nearly every person on the planet will have at least one personal assistant. And our assistants will deal with one another, taking action on our behalf. I believe this adoption will follow roughly the same trajectory as smartphones.
At first there will only be early adopters, then more and more, until we reach the point where it’s harder not to have one than to just have one. We got there with smartphones in less than a decade, and I predict we’ll get there with personal assistants in only a few years.
But there be dangers
Yep, there are dangers. You’re giving a nondeterministic beast control of a dedicated machine. It can do anything a normal computer can do, which is its greatest strength and its most obvious risk. And umm, anyway we’re pretty sure these models are only behaving because they know they’re being watched.
Setting that aside, there’s also prompt injection, which is an unsolved problem in the industry right now. I have some thoughts on this and will share more in future writing. It’s a gnarly problem and a fascinating one too.
But let’s also put that aside for now. I will tell you this: the future belongs to those who embrace this technology. Don’t let excuses stop you.
While I fully acknowledge the risks, I believe this is a risk worth taking. If you’re working in AI or building with AI, this is easily the most important thing to focus on right now. If personal assistants are going to be as ubiquitous as I predict, let’s go ahead and dogfood them right now.
And I suggest you start with OpenClaw. Because @steipete has already built the thing. Study it, use it, stop reading about it and just try it. You will see. Think of everything you’ll learn about building agents, memory, the stuff that matters most right now, and you’re getting in at the ground level with full control of all the levers.
Later, you can always build your own assistant from scratch, tailored to your exact use case. (OpenClaw itself is built on top of Pi.) But I believe it’s worth starting where someone has already built the thing for you.
In other words, I’m pretty sure you can find an approach that is within your comfort zone from a security perspective. I’ll share with you my approach.
Do we gotta pay the Apple tax?
I have a separate machine to run my assistant. I ran out and bought a Mac mini. Very trendy, right? Do I have a mind of my own? Yes, well, at least I believe so. Let me explain.
Do you need to also buy a Mac mini to run OpenClaw? No. You can easily run it on any dedicated PC running Linux. But Mac mini is kind of nice because:
If you want to be able to message your assistant with the native messaging app on your iPhone, you’ll need an Apple device
It’s a low-powered device that you can easily run 24x7
Cost is $599, which isn’t bad for Apple
No, I don’t run any meaningful local inference on the Mac mini, so the lowest-spec model is fine. Any used Apple machine works too. And if you’re not bought into the Apple ecosystem, you won’t even have to pay the Apple tax. Just use a Raspberry Pi or whatever.
I like being able to get blue bubble messages from my assistant; I feel this is pretty neat and novel. But you could just as easily use Signal, Telegram, or WhatsApp. Your choice.
So, with the hardware established, here’s how I set up my Mac mini:
On an isolated network on my home system
Dedicated Apple ID for the assistant
FileVault on
Firewall on
SIP remains enabled
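If you want to verify that checklist from a terminal on the mini, macOS ships a status command for each piece. A quick sketch (the expected output strings are what I’d expect from recent macOS versions; run on the Mac mini itself):

```shell
fdesetup status     # expect: "FileVault is On."
csrutil status      # expect: "System Integrity Protection status: enabled."

# Application-layer firewall state (enabled / disabled):
/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
```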
Keeping SIP enabled, by the way, means we can’t use the Private API with BlueBubbles Server. That unfortunately means no typing indicators, read receipts, or tapbacks. Those are the bougie parts of the texting experience. You can get them with Signal, Telegram, and WhatsApp, along with various levels of support for markdown rendering if you’re into that sort of thing.
Also, let me cook on that SIP requirement. Pretty sure you can run a virtual machine with SIP disabled in the VM and run BlueBubbles Server that way while your host SIP remains enabled. SIP is a pretty good thing to have so I’d rather not disable it just because I want bougie texting. Anyway, basic send and receive works either way, so let’s move on.
The most important thing is do not ever sign in to your new Mac mini with your personal Apple ID. Your assistant should never even have the possibility of accessing:
Your keychain
Apple Wallet
Browser sessions/cookies
It’s a recipe for disaster. Don’t give it even a moment’s glimpse of any of these things. So the assistant gets its own Apple ID. You’ll need a separate account anyway, because if you signed in with your personal ID (don’t do that), you’d be texting yourself. Things would get weird.
You may actually need not one but two Apple machines, and I’ll explain why in a moment. Yea, that Apple tax is real, man.
Zero public exposure
Ideally you should have zero public exposure unless you know what you’re doing and really need it. There are use cases for opening up specific services, but if you’re just starting out, I highly recommend no inbound ports. SSH should be key-only with passwords disabled, but we can do better.
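On macOS, that SSH lockdown lives in /etc/ssh/sshd_config (assuming you leave Remote Login enabled at all; with Tailscale SSH you could skip the system sshd entirely). Something like:

```
# /etc/ssh/sshd_config — key-only access, no passwords
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```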
Everything should route through Tailscale. If you don’t know what that is, listen to this perfectly reasonable explanation by @stolinski:
In the end we want this:
No public inbound ports
Use private device-to-device networking only (Tailscale)
Assistant has its own Apple ID on the Mac mini
Personal Apple ID stays off the Mac mini
Personal Apple ID remains only on your personal machine(s)
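The Tailscale side is only a couple of commands on the mini. I’m assuming a Homebrew install here; the App Store app works just as well (with the cask, the CLI lives inside the app bundle, so you may need its full path):

```shell
brew install --cask tailscale   # or grab the app from the App Store

tailscale up        # authenticate and join your tailnet
tailscale status    # confirm your personal machines show up as peers
tailscale ip -4     # the mini's private 100.x address (or use its MagicDNS name)
```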
Use the clanker
You’ll likely want to vibe configure the whole thing. Seriously, just use the clanker. First up, install the Command Line Tools for Xcode:
xcode-select --install
Then do Homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
I would then install Codex CLI or Claude Code and fire it up inside the OpenClaw repository, which you can get via:
git clone https://github.com/openclaw/openclaw.git
The OpenClaw repository has an AGENTS.md file, so it should be fine to tell the clanker what you want, and it will have the full context of the current documentation and ground truth in the source code.
Things change quickly, so I recommend you ignore any setup guides other than the docs. I’m making an effort here to explain conceptually what I’ve done, so if this fits for you then you can tell your coding agent that is what you want done.
Or tweak it. Whatever, this is the future of working with software. And wait until you see how you can just modify the system itself using the assistant. It can self-heal or modify itself pretty well especially if you give it the context of how you want it set up, but more on that later.
Texting, texting 1-2-3
There are two separate goals here:
I want to text my assistant from my phone like a normal contact.
I want my assistant to have read-only, real-time access to my personal message history.
And remember, we’re not signing my personal Apple ID into the Mac mini.
So no, you don’t need to buy yet another Mac mini. Any Apple machine you already own works as the personal side. In my case, that’s my MacBook Pro.
The split looks like this:
Mac mini (assistant machine): signed into a dedicated assistant Apple ID, running OpenClaw + BlueBubbles. This is the only place the assistant sends messages from.
MacBook Pro (personal machine): signed into my personal Apple ID, running imsg for read-only access to my own Messages database.
For security, I gave the Mac mini a dedicated SSH key and locked it down on the MacBook Pro with a forced command. That key can only execute a tiny wrapper script that allows read-only imsg commands (chats, history, watch) and denies everything else (send, rpc, shell commands, DB path overrides). The key is also restricted and scoped so only the assistant machine can use it.
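To make that forced command concrete, here’s a sketch of what my wrapper amounts to. The script name (imsg-guard.sh) and the exact allowlist are illustrative; the read-only imsg subcommands are the ones mentioned above:

```shell
# imsg-guard.sh — forced-command gate on the personal machine (name is made up).
# Allows only read-only imsg subcommands; prefix match, tighten as needed.
allowed() {
  case "$1" in
    "imsg chats"*|"imsg history"*|"imsg watch"*) return 0 ;;
    *) return 1 ;;
  esac
}

# In ~/.ssh/authorized_keys, every session from the assistant's key is forced
# through this script, with other SSH features stripped:
#   restrict,command="/Users/me/bin/imsg-guard.sh" ssh-ed25519 AAAA... assistant@mini
if [ -n "${SSH_ORIGINAL_COMMAND:-}" ]; then
  if allowed "$SSH_ORIGINAL_COMMAND"; then
    exec $SSH_ORIGINAL_COMMAND   # intentional word splitting to run the command
  else
    echo "denied: $SSH_ORIGINAL_COMMAND" >&2
    exit 1
  fi
fi
```

Because the `restrict` option also disables port forwarding, agent forwarding, and PTY allocation, the assistant’s key really can do nothing but these three read paths.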
I kept host targeting clean with MagicDNS over Tailscale. No public ports, no open SSH to the internet, no personal Apple ID on the assistant machine, and no shell access from assistant machine to personal machine.
One more guardrail: in OpenClaw I deny outbound imessage sends, so even if a tool call goes sideways, the assistant still can’t send through this read-only path. Outbound texting stays BlueBubbles only.
End state is the assistant can read my personal message history in real time, but cannot act as me on my personal machine. That’s exactly the boundary I wanted.
You’ve got (read-only) mail
Now for email. I have a Gmail account. Gross, I know, but it works pretty well actually thanks to the prolific @steipete with gogcli.
I want my assistant to answer questions about my inbox on demand, without giving it permission to send, delete, archive, or modify anything.
I also don’t want to break my zero-public-exposure rule. So for now I skipped Pub/Sub and webhooks. No inbound endpoint, no public callback URL, no extra network surface area. Just pull when asked.
The model here is “boring and safe first”:
On-demand Gmail access only
OAuth scopes locked to read-only
Separate identities stay separate (project owner account can differ from mailbox account)
Assistant can read, summarize, and search; it cannot act as me in Gmail
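The important part of “cannot act as me” is that it’s enforced by the OAuth scope on Google’s side, not by the assistant’s good manners. A sketch against the raw Gmail API, assuming only the `https://www.googleapis.com/auth/gmail.readonly` scope was granted (ACCESS_TOKEN and MSG_ID are placeholders):

```shell
# Read succeeds with the read-only scope:
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://gmail.googleapis.com/gmail/v1/users/me/messages?maxResults=5"

# A write (trash a message) should be rejected with a 403
# "insufficient authentication scopes" error:
curl -s -o /dev/null -w "%{http_code}\n" -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://gmail.googleapis.com/gmail/v1/users/me/messages/MSG_ID/trash"
```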
There was one important identity detail: my Cloud Console account and my real mailbox are different accounts. That’s fine. The Cloud project can live under one Google account, while OAuth authorization is granted by the actual mailbox account that owns the email data.
What done looks like:
I can ask: “What’s in my inbox today?” and get the contents of my latest inbox messages back.
Granted scopes are read-only.
No modify/send scopes are granted.
No public inbound ports were added.
Pub/Sub remains optional for later if I want proactive push notifications.
What’s next?
Well, that’s it for now. I’m in my second week of OpenClawing, and I’ve got to admit: once you let your hair down and stop treating the whole thing like a frightening disaster, it’s a lot of fun.
Good lord, half of Twitter will tell you the sky is falling and the other half are hustle bros exalting their 24x7 employee churning out SaaS slop that nobody uses. There is something genuinely exciting happening here, and it’s neither of those things.
The idea of a personal assistant that pulls in all your data sources, knows your context, and grows with you is now a reality. It’s running on a Mac mini in my house right now, texting me blue bubbles.
We’re still early. The rough edges are real, and prompt injection is unsolved. But the upside is massive, and the learning curve is the point. If you’re building with AI and you haven’t dogfooded a personal assistant yet, you’re missing out on all the fun.
So go off with your friendly neighborhood clanker. Start small, stay paranoid, and enjoy. I’ll be writing more as I go. Let me know your thoughts!


