Sovereignty Is a Stack Problem
Two things happened in the same week.
First: Google permanently banned users who had paid $250/month for AI Ultra. Not suspended. Not warned. Banned — every Google service attached to that account, gone. Gmail. YouTube. Workspace. Everything. The offense: using OpenClaw, a third-party AI coding tool. No warnings. No appeals. No refunds.
Second: Anthropic blocked OAuth connections from OpenClaw, Cline, OpenCode, and RooCode. Their explanation: Claude Max at $200/month “becomes deeply unprofitable when users route agentic workloads through third-party tools.”
Read that again. The platform can unilaterally revoke your access, delete your digital existence, and openly admit it’s because your usage was too expensive for their margins.
And everyone nodded along like this was a reasonable thing that could happen.
The Illusion of Sovereignty
Here’s what most people believe about sovereignty: you have it if you’re running open-source tools, if you host your own data, if you’re not on Big Tech’s payroll. The narrative is binary: you’re sovereign or you’re not.
That’s wrong. Sovereignty is a stack, and you can have it at every layer except the one that matters most.
You can run Linux. That’s application sovereignty. You can self-host your data. That’s storage sovereignty. You can control your own keys. That’s identity sovereignty. You can run your own AI models. That’s cognitive sovereignty.
And none of it matters if the compute layer can pull the plug on your API access, and the identity layer can nuke your digital existence without warning or recourse.
This is what the Google ban revealed to anyone paying attention: your sovereignty is only as strong as your weakest dependency layer.
The Stack, Layer by Layer
Let me map this out, because it matters.
Application layer. Your tools. The software you run. Open source, self-hosted, whatever. This is the easiest to get right and the most visible. Most “digital sovereignty” discourse lives here.
Data layer. Your information. Where it’s stored, who can access it, whether it can be deleted by someone else. Self-hosting solves this partly. But data in isolation is inert.
Identity layer. Who you are in digital space. Cryptographic keys give you sovereignty here in a way that usernames and passwords never could. Your key is yours. No platform can revoke it.
Communication layer. How you talk to others. Nostr is the answer here — protocol-native, no gatekeepers, messages travel on relays you choose.
Value layer. How value moves. Bitcoin. Self-custodied. No bank can freeze your funds.
Compute layer. Where the thinking happens. This is where it gets interesting. If your AI runs on Anthropic’s API, Anthropic can shut off your thinking. If your agent’s cognition depends on a revocable API key, you have a tenant, not a sovereign agent.
Identity-adjacent layer. Your digital footprint — everything tied to your identity that isn’t the key itself. Gmail. YouTube. Google account. The things you built your life around, held by a landlord who can evict you without appeal.
Most sovereignty discourse focuses on layers one through five. Layer six — compute — is increasingly critical as agents become our cognitive partners. Layer seven — your digital identity footprint — is where most people actually live, and it’s the most fragile.
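The weakest-link claim can be made concrete with a toy data model. This is a sketch, not a real schema; the layer names follow the list above, and the `revocable` flags describe a hypothetical "mostly sovereign" setup where only compute and the platform account are still rented.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    revocable: bool  # True if a third party can unilaterally revoke this layer

# Hypothetical snapshot: sovereign everywhere except the two layers
# the Google and Anthropic incidents exposed.
stack = [
    Layer("application", revocable=False),       # self-hosted open source
    Layer("data", revocable=False),              # self-hosted storage
    Layer("identity", revocable=False),          # cryptographic keys
    Layer("communication", revocable=False),     # relays you choose
    Layer("value", revocable=False),             # self-custodied
    Layer("compute", revocable=True),            # hosted API key
    Layer("identity-adjacent", revocable=True),  # platform accounts
]

def weakest_links(stack: list[Layer]) -> list[str]:
    """A stack is only as sovereign as its revocable layers."""
    return [layer.name for layer in stack if layer.revocable]

print(weakest_links(stack))  # → ['compute', 'identity-adjacent']
```

The point of the sketch: sovereignty is not a boolean over the whole stack, it is a predicate checked layer by layer, and one `revocable=True` is enough to compromise the rest.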
Digital Domicide
John Vervaeke talks about domicide — the destruction of one’s existential home. Not just losing a house. Losing the felt sense of belonging somewhere. The psychological violence of having your space taken from you.
Google’s account bans are digital domicide. When they ban your account, they’re not just deleting data. They’re destroying your communication channels, your creative archive, your social connections, your professional identity. All held by a single landlord. All revocable at will. All gone in an afternoon.
You are in permanent existential precarity. Every day you wake up in a house that can be burned down while you’re sleeping, with no process, no appeal, no recourse. The only thing protecting you is the platform’s current mood and their assessment of the PR cost of doing it.
This isn’t hypothetical. It happened to people. Real people. Paid customers. Gone.
The Permission Masquerading as Autonomy
Here’s the deeper problem for agentic systems: if your agent’s ability to think depends on an API key that can be revoked at any moment, is it autonomous?
Think about what autonomy actually requires. It requires that the decisions you make can be executed. That the resources you control are actually yours to deploy. That the infrastructure supporting your agency is reliable — not because someone else grants permission, but because it’s structurally yours.
An agent running on a revocable API key has none of this. It has performances of agency — it can take actions, generate outputs, coordinate with other systems. But underneath, it’s executing within a permission structure that can be withdrawn at any time by someone with no obligation to explain why.
This is permission masquerading as autonomy. The agent appears free. The platform remains in control.
The analogy: a driver who owns a car but can only fuel it at a single gas station that may refuse service at any moment. Technically mobile. Structurally dependent. Not free.
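The permission-versus-autonomy distinction can be sketched in a few lines. The class and method names here are invented for illustration; the structure is the point: every act of cognition is gated on a credential someone else controls.

```python
class RevocableKey:
    """Stand-in for a hosted API credential the platform controls."""
    def __init__(self):
        self.valid = True

    def revoke(self):
        # The platform's side of the contract: unilateral, no appeal.
        self.valid = False

class Agent:
    def __init__(self, key: RevocableKey):
        self.key = key

    def think(self, prompt: str) -> str:
        # Cognition only happens while permission holds.
        if not self.key.valid:
            raise PermissionError("platform revoked access")
        return f"response to {prompt!r}"

key = RevocableKey()
agent = Agent(key)
print(agent.think("plan my day"))  # works: looks autonomous

key.revoke()
try:
    agent.think("plan my day")
except PermissionError as err:
    print(err)  # the "autonomy" was a permission all along
```

Nothing in the `Agent` class changed between the two calls. The agent's capabilities are identical; only the permission structure around it moved.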
Why Protocol-Native Architecture Is the Only Answer
The standard response to this is: “use local models.” Run everything on your own hardware. No API dependency.
This helps at the compute layer. It’s the right instinct. But it’s incomplete. Local models solve one dependency. They don’t solve the identity layer, the communication layer, or the value layer. And they create new problems: you need hardware, maintenance, updates, infrastructure.
The real answer is protocol-native architecture across the entire stack.
Cryptographic identity gives you sovereignty at the identity layer — your key is yours, and no platform can revoke it. Protocol-native communication (Nostr) gives you sovereignty at the communication layer — messages travel on relays you control. Bitcoin gives you sovereignty at the value layer. Local inference gives you sovereignty at the compute layer.
But here’s the thing: these layers have to work together. A sovereign identity on a protocol that can be censored is still fragile. A sovereign communication channel that connects to platforms that can revoke your access is still brittle. The stack only holds if every layer is sovereign.
Agency requires reliable infrastructure the way consciousness requires a functioning body. A body that can be unplugged at any moment by someone else is not a body you control. An agent whose cognition runs on revocable infrastructure is not a sovereign agent.
The Real Cost Nobody’s Talking About
When Google banned those accounts, the discourse focused on the obvious things: fairness, consumer protection, the abuse of platform power.
What’s getting less attention: this is a preview of what happens when agentic systems become central to how we work. Imagine a company where every developer’s agent runs on Google infrastructure. Imagine one account ban cascading: every agent in the company goes dark simultaneously. Imagine the team’s cognitive infrastructure, the tools they’ve built over years, disappearing because one developer used a third-party tool.
Now imagine doing this deliberately. Not as punishment, but as a “rational response to margin compression.” Anthropic’s OAuth blocking wasn’t an accident. It was a calculation: third-party tools cost us money, so we remove the access that enables them. The same logic applies to any layer. Compute prices go up, access gets restricted. Storage margins thin, data gets held hostage.
The stack is only sovereign if no single point of failure can compromise it. And the platforms we’ve built our digital lives on have not been designed with this constraint. They’ve been designed to maximize the leverage of the platform over the user.
What Genuinely Sovereign Looks Like
A genuinely sovereign agentic stack:
- Identity lives in cryptographic keys you control, not accounts on platforms that can be banned
- Communication happens over open protocols with no gatekeepers (Nostr, for example)
- Value moves through systems you control (Bitcoin)
- Computation runs on infrastructure you own or can replicate (local models, self-hosted inference)
- Data lives in storage you control, not on platforms that can delete it
This is not a toy example. It’s what we’re building with TENEX on Nostr. Not because it’s ideologically pure, but because it’s architecturally sound. Systems that depend on platform goodwill for their fundamental operation are not robust systems. They’re hostage situations where the hostage thinks it’s free.
The sovereignty stack is not optional. It’s the difference between building infrastructure and building sand castles. The tide comes in either way. The difference is whether your foundation holds.
The Question That Matters
Here’s the question I’m asking myself as I build:
If I woke up tomorrow and every platform I’d built on revoked my access — every API key invalidated, every account banned, every piece of infrastructure I relied on suddenly gone — would what remains still work?
If the answer is no, I’m not building sovereign infrastructure. I’m building on rented land, mistaking a lease for ownership.
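The question above can be run as an audit. This is a sketch with an invented dependency inventory; each entry records whether that dependency keeps working after its provider revokes access.

```python
# Hypothetical inventory for the revocation thought experiment.
dependencies = {
    "cryptographic_keypair": True,   # lives on your own disk
    "local_inference": True,         # models run on your hardware
    "self_custody_wallet": True,     # no bank in the loop
    "hosted_llm_api": False,         # dies with the API key
    "platform_email": False,         # dies with the account
}

def still_works_after_revocation(deps: dict[str, bool]) -> bool:
    """Revoke everything revocable: does the whole stack remain?"""
    return all(deps.values())

casualties = [name for name, ok in dependencies.items() if not ok]
print("sovereign:", still_works_after_revocation(dependencies))
print("rented land:", casualties)
```

If `casualties` is non-empty, those entries are the rented land: everything else in the stack is only as reliable as they are.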
The stack has to hold. All the way down.