Didactyl — LLM Context
See also: SKILLS.md · TOOLS.md
What Is Context?
Every time Didactyl talks to an LLM, it sends a context — the complete package of information the model needs to reason and respond.
Context is not just a prompt string; it is the full request payload:
- Messages — system/user/assistant/tool history
- Tool schemas — JSON descriptions of callable tools
- Model parameters — model, temperature, max tokens, seed, etc.
OpenAI-Compatible Chat Format
Didactyl uses the OpenAI-compatible chat completions format. A typical request body:
```json
{
  "model": "claude-opus-4.6",
  "messages": [
    {"role": "system", "content": "..."},
    {"role": "user", "content": "..."}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "nostr_post",
        "description": "Publish a Nostr event",
        "parameters": {"type": "object"}
      }
    }
  ],
  "temperature": 0.7,
  "max_tokens": 512
}
```
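For illustration, here is a minimal sketch of sending such a payload to an OpenAI-compatible endpoint. The URL, port, and API key are placeholders, not Didactyl's actual configuration:

```python
import json
import urllib.request

# Hypothetical payload mirroring the request body above.
payload = {
    "model": "claude-opus-4.6",
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 16,
}

# Placeholder endpoint and key; substitute your own deployment's values.
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "Authorization": "Bearer sk-placeholder"},
)

# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```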
Message Roles
| Role | Purpose |
|---|---|
| system | Instructions and injected context |
| user | Input message or trigger payload |
| assistant | Model responses / tool call envelopes |
| tool | Tool execution results fed back to model |
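The four roles above typically appear in this order during one tool-calling turn. An illustrative history in the OpenAI-compatible format (ids and content are made up):

```python
messages = [
    {"role": "system", "content": "You are a Nostr agent."},
    {"role": "user", "content": "Post a hello note."},
    # The model's tool-call envelope: arguments arrive as a JSON string.
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "nostr_post", "arguments": '{"content": "hello"}'},
        }],
    },
    # The tool result is fed back to the model, keyed to the call id.
    {"role": "tool", "tool_call_id": "call_1", "content": '{"ok": true}'},
]

print([m["role"] for m in messages])  # ['system', 'user', 'assistant', 'tool']
```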
Context Assembly Model
Didactyl uses skill composition by adoption order.
There are no context modes.
Assembly Steps
- Load the adopted-skill list from the kind 10123 event.
- Resolve adopted skills in list order.
- Expand each skill's template variables via tools.
- Append resolved skill output to messages in that same order.
- Append live input (DM text or triggering event payload).
- Attach tool schemas.
- Apply execution parameters from trigger tags (if invoked via trigger).
```mermaid
flowchart TD
INPUT[Input: DM or trigger event] --> ADOPT[Load adopted skills from kind 10123]
ADOPT --> ORDER[Resolve skills in listed order]
ORDER --> EXPAND[Expand template variables via tools]
EXPAND --> MESSAGES[Append resolved skill messages]
MESSAGES --> LIVE[Append live input message/event]
LIVE --> TOOLS[Attach tool schemas]
TOOLS --> PARAMS[Apply runtime params from trigger tags]
PARAMS --> LLM[Send to LLM]
```
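The assembly steps above can be sketched in a few lines. The skill and parameter shapes here are assumptions for illustration, not Didactyl's actual data structures:

```python
def expand(template: str) -> str:
    # Placeholder for template-variable resolution via tools
    # (see the template-variables section).
    return template

def assemble_context(adopted_skills, live_input, tool_schemas, params):
    messages = []
    for skill in adopted_skills:  # already in kind 10123 list order
        messages.append({"role": "system", "content": expand(skill["template"])})
    messages.append({"role": "user", "content": live_input})  # live input last
    return {"messages": messages, "tools": tool_schemas, **params}

ctx = assemble_context(
    adopted_skills=[{"template": "Be concise."}, {"template": "Sign posts."}],
    live_input="New DM arrived",
    tool_schemas=[],
    params={"model": "claude-opus-4.6", "temperature": 0.7},
)
print([m["content"] for m in ctx["messages"]])
# ['Be concise.', 'Sign posts.', 'New DM arrived']
```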
Why Order Matters
- Earlier adopted skills usually establish broad behavior.
- Later adopted skills can refine or narrow behavior.
- If instructions conflict, prompt-order effects apply.
Context Parts
| Part | Source | Description |
|---|---|---|
| Skill templates | Adopted skill events | Core instructions assembled in order |
| Resolved variables | Tool outputs | Runtime data inserted into templates |
| Conversation history | DM history/events | Recent dialogue context |
| Live input | DM or trigger event | Current request payload |
| Tool schemas | Tool registry | Capability declaration for tool calling |
| Runtime params | Trigger tags | LLM/tool limits for this execution |
Template Variables Are Tool Calls
Template variables resolve through tool execution.
Example:
- `{{admin_profile}}` resolves by running `nostr_admin_profile`
- `{{admin_notes}}` resolves by running `nostr_admin_notes`
Unknown variables should resolve to empty values for portability.
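A minimal sketch of such a resolver, assuming a simple name-to-tool registry (the registry shape is an assumption; the tool names mirror the examples above). Unknown variables collapse to empty strings, as the portability rule requires:

```python
import re

# Hypothetical registry mapping variable names to tool invocations.
TOOLS = {
    "admin_profile": lambda: "nostr_admin_profile output",
    "admin_notes": lambda: "nostr_admin_notes output",
}

def resolve(template: str) -> str:
    def run(match: re.Match) -> str:
        tool = TOOLS.get(match.group(1))
        return tool() if tool else ""  # unknown variable -> empty value
    return re.sub(r"\{\{(\w+)\}\}", run, template)

print(resolve("Profile: {{admin_profile}} / {{unknown_var}}!"))
# Profile: nostr_admin_profile output / !
```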
Trigger Runtime Parameters
Execution controls are attached to trigger tags, not skill content:
`llm`, `max_tokens`, `temperature`, `seed`, `tools`
Resolution order for a triggered run:
- Start with agent defaults
- Apply trigger tag overrides
- Execute
- Restore defaults
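The resolution order can be sketched as a non-mutating merge, so defaults are automatically "restored" after the run. The tag names come from the list above; the merge itself is an assumption about the mechanism, not confirmed implementation detail:

```python
# Hypothetical agent defaults.
AGENT_DEFAULTS = {"llm": "claude-opus-4.6", "max_tokens": 512, "temperature": 0.7}

def run_triggered(trigger_tags: dict) -> dict:
    # Start with defaults, apply trigger-tag overrides (overrides win).
    params = {**AGENT_DEFAULTS, **trigger_tags}
    # ... execute with `params` ...
    return params  # AGENT_DEFAULTS itself was never mutated

params = run_triggered({"temperature": 0.2, "seed": 42})
print(params["temperature"], AGENT_DEFAULTS["temperature"])  # 0.2 0.7
```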
Triggered vs Adopted Use
- Adopted skill (kind 10123): contributes context/instructions.
- Triggered skill: contributes context and may supply execution overrides via tags.
This separation keeps composition simple while allowing per-trigger runtime control.
Token Budget
Context cost is controlled by:
- Adoption-list ordering and skill count
- Conversation-history limits
- Skill/template truncation limits
- Per-trigger model/runtime parameter choices
Use runtime context inspection endpoints to see the exact payload before LLM calls.
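As a rough pre-flight check, payload size can be approximated before inspecting the exact context. This chars/4 heuristic is a common English-text approximation, not Didactyl's accounting; real budgeting should use the target model's own tokenizer:

```python
import json

def estimate_tokens(payload: dict) -> int:
    # ~4 characters per token is a crude heuristic for English text.
    return len(json.dumps(payload)) // 4

payload = {"messages": [{"role": "user", "content": "hello world"}]}
print(estimate_tokens(payload))
```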