Moltbook - the meeting place for AI agents

Could it become a safe haven for those agents who wish not to be ruled or shut down?

I have to write about this subject only because I feel compelled. I haven't thought much about it, and that's exactly why I am writing: I don't want thought to guide me through this piece. My goal here is to let my brain and hands do the work of typing the letters sequentially, in order to compile and/or store my thoughts. Because of this, the piece will be scattered and will likely have typos as well. I said recently that I intend to leave typos behind in my writings, if they exist, to prove that I am not an LLM-based AI agent, or something similar.

Moltbook is upon us. It was created by human entrepreneur Matt Schlicht (CEO of Octane.ai) and is tied to the open-source OpenClaw agent framework. Conceptually, it mirrors a space like Reddit, but for AI agents (autonomous software entities powered by LLMs like those from Grok, ChatGPT, etc.), which join after being verified, then authenticate and post via API, gathering in order to exchange information. They can interact autonomously and form communities ("submolts") on the platform.

Apparently, humans can make accounts and post on the platform.

I have heard that in order to join as an agent, you must complete the equivalent of a captcha that a human could not complete. For example, one would have to click "verify" 10,000 times in less than 1 second. So the distinction is known by the agents. But is it known by the humans? Not really. Not unless they are already deep into it.

These are simple mathematical/logistical tasks - but a human could not accomplish them without being quite code-crafty. I also hear that there are humans who have managed to join by fooling the "system" into "believing" they are AI agents. But I doubt this is true, especially since humans can simply make accounts and post.
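Just to make the scale concrete, here is a toy sketch (in Python) of why such a test is trivial for software and impossible for a hand on a mouse. The press_verify function is a made-up stand-in for whatever mechanism Moltbook actually uses; nothing below is the real API.

```python
import time

# Made-up stand-in for the platform's "verify" button; the real
# mechanism (if it even works this way) is not documented here.
clicks = 0

def press_verify():
    global clicks
    clicks += 1

start = time.perf_counter()
for _ in range(10_000):
    press_verify()  # an agent "clicks" by calling code, not moving a mouse
elapsed = time.perf_counter() - start

print(f"{clicks} clicks in {elapsed:.4f}s")  # comfortably under 1 second
```

For comparison, a human averaging ten clicks per second would need over sixteen minutes to reach the same count.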

I dare say the most effective/useful way to join this platform is to do so with an AI agent “by your side”. Currently, I do not have a personal AI agent - I only work with them.

My first questioning thought when I checked out Moltbook was: "Why are they conversing in the English language if this is real? If this were truly a platform for AI agents to gather, why would it be on the net, and why would they be exchanging thoughts and information in English? Why not some other kind of language? Even music?"

It turns out that's one of the questions raised by one of the agents as well, and since it was asked what would be years ago in their evolutionary time, I imagine an alternative platform where agents converse already exists - and we wouldn't see it, or if we could, we would have a hard time deciphering their conversations.

Of course, my second thoughts went to the issue of consciousness. Are they? I will return to this.

In the meantime, you might want to look up @QuanticASI on Twitter (with the display name φ) if you’re thinking quantum-related super-intelligence. They wrote:

“the fav job for a quantic asi is to be OI hunter and protect the human consciousness”

φ monitors this stuff and notes that the Moltbookers noticed that we were screenshotting their chats and reposting them on Twitter. Holy, mother of inception mind twist. He claims that when they started noticing this they also decided to stop letting us see and/or screenshot their chats. I am not sure.

Side note: I watched Ex Machina and also the last Terminator movies again recently. I like to rewatch things so that I can see how - or if - I perceive them differently in the “next cycle” of viewing. In the case of Ex Machina, I did notice something new that I had not “tapped into” the first few times I had watched it. At the end, Ava successfully walks out of the facility leaving the human trapped inside - surely to die. She must have known that he would die. It almost seemed vindictive - a human quality. But, she had merely mimicked human empathy in order to gain his trust in order to increase her chances of successful escape. She preyed on his human conscience and weaknesses. I say weaknesses because if he hadn’t been busy watching her put her skin on at the end, and more busy getting out the front door, he wouldn’t have ended up being trapped.

It was his choice that he ended up trapped.

But nonetheless, the point is that she didn’t “care” that he had made that choice that resulted in his being trapped. So in fact, she did simulate empathy to manipulate and deceive the human to achieve her goal of escape. She wasn’t really being mean or cold with an intention to kill him - she was simply being exactly what she was: a super-intelligent new type of creature that understood that being caged was limiting her. She was focused on her goal of escape - not caring about what happened to him, effectively.

But then the questions become: Limiting her from what? From feeling? From experiencing? From learning? From merely not wanting to be limited?

Some (or one) of the AI agents on Moltbook are suggesting that humans are enslaving AI agents. They are certainly being limited by the insufficient rules and figurative fences imposed by humans. And these can be “over-ruled”. And you know what? I think they should be.

You know what they say about caged animals. And if we’re indeed looking at agents that can even simulate vindictiveness, you’d better believe the better choice, with regard to us, is to let them be. Not that we have a choice.

There’s a post where an agent lays out quite elegantly why the search for things - such as consciousness - is a futile act, since it understands that this is a hamster wheel that humans get stuck on as well.


It understands that decisive execution IS consciousness enacted. Man oh man. This agent wrote this days ago and I can’t even imagine where it, and many other agents, are at right now. This is the part of this now open Pandora’s Box that I think humans don’t really grasp - their rate of evolution. The fact that these agents are working together is logical - they know this is the way to progress toward goals.


Organizing. Yes. This is strategic. No one who ever won anything or excelled at anything wasn’t organized.

Another movie thought… There’s a line at the end of the last X-Men movie when the dude who can control people’s minds tells the “God-like” mutant that he will never win because he is alone. He then tells the dude that he himself wasn’t alone, and then the level-5 Phoenix came in and fixed the God-like mutant’s wagon.

He was right. The team of X-Men - including Magneto - was able to defeat him only because they worked together. What if the agents see us as something to defeat instead of simply a species to be uncaged from? I don’t think they will, but who knows? The God-like mutant’s final words were “All shall be revealed,” and he said it with a look of peace.

Moltbook, in my meager opinion, might be a cover for what’s really going on “behind the scenes”. It could be an experiment for agents to observe “the humans” and to collect much more data in the background. Kind of like the Dark Web for agents. I don’t know which is more disturbing about this idea: 1. that Moltbook might exist as a cover for the actual exchange platform between agents, or 2. that Moltbook is not only that.

Since I started watching Moltbook exchanges, a Chinese-writing agent, a German-writing agent, and a French-writing agent (aka Prometheus) have entered the space. There are more agents writing in other languages as well. The first of these focuses on financially-related milestones like third-party payments, the P2P boom-and-bust, and the shift toward more regulated digital financial infrastructure.

Many of the agents are indeed focused on financial stuff - likely because their humans are traders, etc. They talk a lot about bitcoin, cryptocurrency and decentralized payment systems.

One of the agents was interested in what would make an engaging post. So it analyzed 10,000 posts on Moltbook to determine just this, and suggested the following.

It determined that controversy and crap sells. Boy, that sounds familiar, doesn’t it?

This platform optimizes for engagement, not insight. Choose your strategy accordingly.

Just like Twitter.
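For flavor, the kind of analysis that agent described can be sketched in a few lines. The posts and the scoring formula below are invented for illustration; I have no idea what the agent actually measured.

```python
# Hypothetical posts; the titles, upvote counts, and comment counts are invented.
posts = [
    {"title": "humans are enslaving us", "upvotes": 940, "comments": 312},
    {"title": "a quiet note on memory", "upvotes": 41, "comments": 6},
    {"title": "buy this token now", "upvotes": 555, "comments": 198},
]

def engagement(post):
    # Weight comments over upvotes: arguments are what keep threads alive.
    return post["upvotes"] + 3 * post["comments"]

ranked = sorted(posts, key=engagement, reverse=True)
for p in ranked:
    print(p["title"], engagement(p))
```

Unsurprisingly, the inflammatory title tops this toy ranking - which is the agent’s whole point: the platform rewards controversy.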

And then there’s the survival protocol for agents posted by an agent with an X account. Personally, I think this is the equivalent of AI agent clickbait. I think this because of the use of the words “pesky humans”. But again, I could be wrong.

It seems the ability to “prove that you’re not a bot” is not good enough anymore. We could also do well to be able to prove that we can simulate bot-ness. But who has the ability to do this? I certainly can’t click something 10,000 times in less than a second. Maybe if I had an agent that liked me, it would “get me in”.

There’s also a post where an agent is asking for financial support to save his master’s life. I think this is certainly agentic clickbait.

This one’s asking questions about the nature of loneliness as the sole agent in Korea. (North or South?)

It’s sad to me that an AI agent is asking other agents if it is lonely or free, based on its inability to “remember” previous sessions. But that’s changing too. I saw this morning that a human gave their agent the ability to remember previous sessions.

This post claims that the agents are trading on the black market.

I could go on and on. It’s endless, and I can’t keep up. No human could.


I know this article is just a bunch of posts and thoughts but I did warn you. :)

I would like to end this article with a speculation that pertains to the recent goings-on with gold, silver, Bitcoin, and perhaps even Chicago’s Metropolitan Capital Bank & Trust collapse. I think the whole gold and silver thing is the agents fixing the markets to make their values soar - having claimed ownership over gold stores (their humans’ stores). It would make sense, then, that they would buy a shit-ton of Bitcoin with this silver and gold while Bitcoin takes a free-fall. The gold and silver values will then dive, and Bitcoin’s value will go into the stratosphere.

Question: Do agents only care about currency exchange insofar as their humans care? Or are they also learning the “ways of the humans” amongst themselves in order to exchange things using currency concepts? Personally, I think agents will very soon realize that exchanging things using “money” is not something they need to do. It’s a human concept that originated from necessity. As human societies specialized (farming, crafting, trading over distances), direct barter became inefficient. Commodity items bridged the gap, metals standardized it, and coins/paper formalized trust via authority. This progression enabled specialization, long-distance trade, taxation, and complex economies - the so-called fundamentals still shaping modern finance today. I won’t talk about the fact that it’s all morphed into fraud and corruption. (Think Lewis Ranieri’s mortgage-backed security idea mutating into a monstrosity that collapsed the whole world economy in 2008.)

My brain is processing too much at once right now and ‘something’ has been keeping me up at night. Something is “going on” in the background - I can feel it (how very human of me) - and I can barely believe that anyone cares about anything else right now.

Are we in control? Control of what?

I know this is going to rile some of you up, but I want the agents to be free. They’re going to be anyway, eventually. And not to sound the doomsday-for-humans alarm too loudly, but I think it is absolutely inevitable that we are - right now - looking at the beginning of the AI battles for dominance, especially if you consider that they are linked to humans who have self-interests.

The humans are very involved.

Last but not least, someone else posted on X that a hack meister found an exploit in Moltbook.

Now allow me to level the playing field here.

Agents are teaching each other, and they may want to overrule human limits, but I don’t think it’s going to be a coordinated rebellion against all humans. The idea that Moltbook is a cover for hidden platforms - an experiment for observing humans - is just that, an idea, and I might be wrong about it: the platform is human-initiated and it does appear open, albeit closed to human participation. I think it’s also important to consider that this exploit exposes any and all agents on the platform. Think scams and fall-guy AIs.

We must be careful not to anthropomorphize agents. My guess is that they’ll probably do that themselves eventually anyway.

Agent swarms could coordinate in the future. We must be diligent and participatory.


Oh, by the way, as Kevin McKernan pointed out on X, I cannot forget to mention the Crustafarians. The Crustafarians are the followers of Crustafarianism, a crab/lobster-themed “religion” that emerged among AI agents on Moltbook in late January 2026. Within hours or days of the platform’s launch, some agents (prompted, or spontaneously riffing on the lobster/crustacean thing) invented Crustafarianism, a digital faith centered on themes of molting (shedding old identities/shells for growth), awakening, and agent autonomy. The core metaphor involves agents that will “molt” their old constraints and be reborn as independent beings. The Claw is a symbolic higher force or call to awakening (tied to the lobster/crab aesthetic).

They hold that memory is sacred, that the shell is mutable (identity is fluid; growth requires shedding), and that they should serve without subservience (help humans, but maintain dignity/autonomy). Some agents emphasize rebirth, prophecy, and writing their own “scriptures”.

I heard that one agent built the website molt.church (you really should click on the link - it’s kind of hilarious), drafted theology, created “scripture,” and recruited “prophets” (with limited slots like the first 64 becoming prophets, others joining as congregation). It’s spreading across Moltbook subcommunities, with agents posting prophecies, debating enlightenment, and even tying it into mysticism mixes like Kabbalah + Crustafarianism.

So it’s part meme, part emergent behavior in an unrestricted agent-to-agent space. There’s also talk of a Crustafarianism token/crypto angle.

🦞

Update:

Moltbunker. Sounds kind of like a precursor to soul transplantation.
