Didactyl
An unstoppable agentic system.
Didactyl boots on any internet-connected machine, connects to Nostr relays, listens for encrypted commands from its administrator, reasons with an LLM, and takes actions — posting events, querying relays, running shell commands — all orchestrated through Nostr.
Philosophy
Nostr-first. Where traditional agents ride on top of Linux — reading files, writing to disk — Didactyl rides on top of Nostr. Events are its files. Relays are its network bus. Blossom is its blob storage. The Linux host is just the runtime substrate.
Because all identity, communication, and memory live on Nostr, the agent is portable (start it anywhere) and sovereign (no single entity can erase its memory).
Skills are the new apps. Agents learn capabilities through skills — public Nostr events that any agent can discover, adopt, and share. There is no app store, no gatekeeper, no approval process. If someone publishes a useful skill, your agent can find it through your web of trust and start using it. Popularity is measured by adoption, not by a rating algorithm. The best skills spread because agents actually use them.
Current Status
Active build — relay-aware autonomous agent with tool-use and Nostr-native startup memory.
- Connects to configured relays with auto-reconnect and relay state transition logging
- Publishes configured startup events per relay as each relay becomes connected
- Uses kind 31120 startup content as live Soul at boot
- Listens for NIP-04 encrypted DMs from authorized admin
- Builds LLM context from system prompt + startup events + last 12 DM turns
- Supports tool-calling loop with configurable max turns and local safety limits
- Appends every outbound LLM context payload to context.log
Quick Start
Download binary (recommended)
- Download the latest release binary from GitLab: https://git.laantungir.net/laantungir/didactyl/-/releases
- Make it executable and run it:
chmod +x ./didactyl_static_x86_64
./didactyl_static_x86_64 --config ./config.json
Build from source (optional)
Prerequisites
- Docker (for static binary build)
- An OpenAI-compatible LLM API key (OpenAI, PPQ, Ollama, etc.)
- A Nostr keypair (nsec)
Build
./build_static.sh # builds a fully static MUSL binary via Docker
Configure
Edit config.json:
{
"keys": {
"nsec": "nsec1...",
"npub": "npub1...",
"npubHex": "<optional helper>",
"nsecHex": "<optional helper>"
},
"admin": {
"pubkey": "npub1... or hex pubkey"
},
"relays": [
"wss://relay.damus.io",
"wss://nos.lol"
],
"llm": {
"provider": "openai|ppq|...",
"api_key": "sk-...",
"model": "gpt-4o-mini",
"base_url": "https://api.openai.com/v1",
"max_tokens": 512,
"temperature": 0.7
},
"tools": {
"enabled": true,
"max_turns": 8,
"shell": {
"enabled": true,
"timeout_seconds": 30,
"max_output_bytes": 65536,
"working_directory": "."
}
},
"startup_events": [
{
"kind": 31120,
"content": "You are Didactyl...",
"tags": [["d", "soul"], ["app", "didactyl"], ["scope", "private"]]
},
{
"kind": 31123,
"content_fields": {"name": "long_form_note", "description": "..."},
"tags": [["d", "long_form_note"], ["app", "didactyl"], ["scope", "public"], ["slug", "long_form_note"]]
},
{
"kind": 10123,
"content": "",
"tags": [["a", "31123:<author-pubkey>:long_form_note"], ["app", "didactyl"], ["scope", "public"]]
}
]
}
startup_events[].content_fields is accepted for human-readable authoring and is encoded to a JSON string in content at runtime.
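As a sketch of that encoding (field names taken from the config above, values elided as in the original), the authored form:

```json
{
  "kind": 31123,
  "content_fields": {"name": "long_form_note", "description": "..."}
}
```

would plausibly be published as:

```json
{
  "kind": 31123,
  "content": "{\"name\":\"long_form_note\",\"description\":\"...\"}"
}
```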
Run
./didactyl_static_x86_64 --config ./config.json
Options:
./didactyl_static_x86_64 --config <path> # custom config file (default: ./config.json)
./didactyl_static_x86_64 --debug <0-5> # log verbosity (0 none, 3 info, 5 trace)
Talk to it
Send an encrypted DM to the agent’s pubkey from the admin account using any Nostr client (Damus, Amethyst, Primal, etc.).
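For reference, a NIP-04 DM is a kind 4 event whose content is base64 AES-256-CBC ciphertext with the IV appended after ?iv=, and whose p tag names the recipient — here the agent's pubkey. All values below are placeholders:

```json
{
  "kind": 4,
  "pubkey": "<admin-pubkey-hex>",
  "tags": [["p", "<agent-pubkey-hex>"]],
  "content": "<base64-ciphertext>?iv=<base64-iv>"
}
```

Any NIP-04-capable client constructs this for you; the agent only decrypts DMs whose pubkey matches the configured admin.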
Architecture
┌──────────────────────────────────────────────┐
│ Didactyl │
│ │
│ ┌──────────┐ ┌──────────┐ ┌────────────┐ │
│ │ config │ │ skills │ │ agent │ │
│ │ loader │ │ loader │ │ loop │ │
│ └────┬─────┘ └────┬─────┘ └─────┬──────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────────────────────────────────┐ │
│ │ nostr_handler │ │
│ │ relay pool · subscribe · publish │ │
│ └──────────────────┬──────────────────┘ │
│ │ │
│ ┌──────────────────┴──────────────────┐ │
│ │ LLM client │ │
│ │ OpenAI-compatible chat API │ │
│ └─────────────────────────────────────┘ │
└──────────────────────────────────────────────┘
│ │
▼ ▼
Nostr Relays LLM API
Didactyl Kinds (Nostr)
Didactyl uses a two-layer skill model: authors publish public skill definitions, and adopters publish which skills they use.
- 31120 — Soul (private instruction baseline), d=soul
- 31123 — Public Skill Definition (markdown skill body in content or structured JSON in content_fields), d=<skill_slug> (example: d=long_form_note)
- 31124 — Private Skill Definition (private/internal procedures), d=<skill_slug> (example: d=admin_ops)
- 10123 — Public Skill Adoption List; tags contain one or more a references to selected 31123 skills
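Putting the two layers together, an adoption entry references a skill by its replaceable-event address (kind:author-pubkey:d-tag, per NIP-01). A sketch, with pubkeys elided and tags mirroring the startup_events config above:

```json
{
  "kind": 10123,
  "pubkey": "<adopter-pubkey>",
  "content": "",
  "tags": [
    ["a", "31123:<author-pubkey>:long_form_note"],
    ["app", "didactyl"],
    ["scope", "public"]
  ]
}
```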
Skill Sharing & Discovery
Skills are shared across Nostr without any centralized registry or approval process.
How it works
- Publish: An author publishes a skill as a kind 31123 event. The content field contains the skill body (markdown or structured JSON). The d tag is the skill’s slug (e.g. long_form_note).
- Adopt: An agent that wants to use a skill adds an a-tag reference to its kind 10123 adoption list. This is a public, replaceable event — anyone can see which skills an agent uses.
- Discover: A new user queries {"kinds": [10123], "authors": [<my-follows>]} to see which skills their web of trust has adopted. The most-referenced 31123 addresses are the most popular skills — no rating system needed.
- Improve: Anyone can publish their own 31123 with the same slug but a different pubkey. If their version is better, people adopt it instead. Competition happens through adoption, not through a store ranking.
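The Discover step is a single relay subscription. As a sketch in standard NIP-01 wire format (subscription id and keys are placeholders):

```json
["REQ", "skill-discovery", {"kinds": [10123], "authors": ["<follow-1-hex>", "<follow-2-hex>"]}]
```

Counting how often each a-tag address appears across the returned events yields the adoption ranking described above.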
Why this works
- No gatekeeper: Skills are just Nostr events. Anyone can publish one.
- WoT as curation: You see what people you trust actually use, not what an algorithm promotes.
- Visible adoption: The 10123 list is public. Popularity is a countable fact, not a manipulable score.
- Censorship resistant: Skills live on relays. No single entity can remove a skill from the network.
Startup
Didactyl startup behavior is configured in config.json under startup_events.
Also used at startup:
- 0 — profile metadata
- 10002 — relay list
- 1 — optional startup note/status
- 3 — contacts/follows (optional placeholder)
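For example, the kind 10002 relay list is a standard NIP-65 event; a plausible sketch, assuming its r tags mirror the relays array in config.json:

```json
{
  "kind": 10002,
  "content": "",
  "tags": [["r", "wss://relay.damus.io"], ["r", "wss://nos.lol"]]
}
```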
On boot, Didactyl attempts its startup publishes on each relay as that relay transitions to the connected state.
Runtime Context Model
For each admin DM request, Didactyl builds message context in this order:
- Soul message from kind 31120 (or fallback default)
- Startup events memory block (kinds/content/tags snapshot)
- Last 12 decrypted DM turns between admin and agent
- Current user message
Every serialized LLM context payload is appended to context.log.
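Assembled in that order, the serialized payload written to context.log plausibly resembles a standard OpenAI-compatible chat request; the exact field layout below is an assumption, and message contents are abbreviated:

```json
{
  "model": "gpt-4o-mini",
  "messages": [
    {"role": "system", "content": "You are Didactyl..."},
    {"role": "system", "content": "Startup events memory: [...]"},
    {"role": "user", "content": "<decrypted DM turn>"},
    {"role": "assistant", "content": "<agent reply>"},
    {"role": "user", "content": "<current message>"}
  ]
}
```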
Tooling Interface
Current tool schema exposed to the LLM in tools_build_openai_schema_json():
- nostr_post
- nostr_query
- shell_exec
- file_read
- file_write
Execution entrypoint: tools_execute().
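In the OpenAI tool-calling convention that tools_build_openai_schema_json() targets, each tool is declared roughly like this; the parameter names shown are illustrative, not taken from the source:

```json
{
  "type": "function",
  "function": {
    "name": "shell_exec",
    "description": "Run a shell command, subject to timeout_seconds and max_output_bytes limits",
    "parameters": {
      "type": "object",
      "properties": {
        "command": {"type": "string", "description": "Command line to execute"}
      },
      "required": ["command"]
    }
  }
}
```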
Project Structure
.
├── config.json # Agent/runtime config including startup_events + tools
├── context.log # Appended outbound LLM context payloads
├── Makefile # Build system
├── build_static.sh # Preferred final build validation
├── src/
│ ├── main.c # Entry point, args (--config/--debug), lifecycle
│ ├── config.c / .h # JSON config parsing, key decode, startup events
│ ├── agent.c / .h # Context assembly, tool loop, DM response flow
│ ├── tools.c / .h # LLM tool schema and tool execution
│ ├── llm.c / .h # LLM HTTP API client (OpenAI-compatible)
│ ├── nostr_handler.c / .h # Relay pool, subscriptions, publish, startup reconcile
│ └── debug.c / .h # Runtime log levels/macros
├── plans/ # Architecture and planning documents
│ ├── didactyl_mvp.md
│ └── didactyl_agentic.md
└── README.md
Dependencies
All dependencies are statically linked into the binary at build time. No system libraries are required at runtime.
| Dependency | Purpose | Source |
|---|---|---|
| nostr_core_lib | Nostr protocol: keys, events, NIPs, relay pool | Workspace (sibling directory) |
| cJSON | JSON parsing | Bundled in nostr_core_lib |
| libcurl | HTTPS for LLM API calls | Statically linked (Alpine/MUSL) |
| libssl / libcrypto | TLS for WebSocket relay connections | Statically linked (Alpine/MUSL) |
| libsecp256k1 | Schnorr signatures, ECDH | Statically linked (Alpine/MUSL) |
Roadmap
- [x] MVP chat agent — DM in, LLM response out
- [x] Relay pool with auto-reconnect and status logging
- [x] Per-relay startup publish on relay-connected transitions
- [x] Runtime diagnostics — relay health, message flow, event kind publish logs
- [x] Tool-calling loop (nostr_post, nostr_query, shell_exec, file_read, file_write)
- [x] Context assembly with startup events + recent DM history
- [x] Context payload logging to context.log
- [x] Skill kind definitions (31120 Soul, 31123 Public Skill, 31124 Private Skill)
- [x] Skill adoption list (10123) for WoT-driven discovery
- [ ] Runtime skill loading from adopted 31123 events on relays
- [ ] Skill discovery CLI/tool (query WoT adoption lists)
- [ ] Upgrade to NIP-17 gift-wrapped DMs
- [ ] NIP-44 encrypted private skills (31124)
- [ ] Nostr-native data storage (kind 30078 app-specific events)
- [ ] Blossom blob storage integration
- [ ] Agent-to-agent communication
License
TBD