# Didactyl — LLM Context

See also: [SKILLS.md](SKILLS.md) · [TOOLS.md](TOOLS.md)

## What Is Context?

Every time Didactyl talks to an LLM, it sends a context — the complete package of information the model needs to reason and respond.

Context is not just a prompt string; it is the full request payload:

  1. Messages — system/user/assistant/tool history
  2. Tool schemas — JSON descriptions of callable tools
  3. Model parameters — model, temperature, max tokens, seed, etc.

## OpenAI-Compatible Chat Format

Didactyl speaks the OpenAI-compatible chat completions format. A representative request payload:

```json
{
  "model": "claude-opus-4.6",
  "messages": [
    {"role": "system", "content": "..."},
    {"role": "user", "content": "..."}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "nostr_post",
        "description": "Publish a Nostr event",
        "parameters": {"type": "object"}
      }
    }
  ],
  "temperature": 0.7,
  "max_tokens": 512
}
```
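
Any OpenAI-compatible client can send this payload. Below is a minimal sketch using the official `openai` Python package; the base URL and API key are placeholders, not real Didactyl endpoints.

```python
from openai import OpenAI

# Placeholder endpoint and key; point these at whatever
# OpenAI-compatible backend Didactyl is configured to use.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="placeholder")

response = client.chat.completions.create(
    model="claude-opus-4.6",
    messages=[
        {"role": "system", "content": "..."},
        {"role": "user", "content": "..."},
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "nostr_post",
            "description": "Publish a Nostr event",
            "parameters": {"type": "object"},
        },
    }],
    temperature=0.7,
    max_tokens=512,
)
print(response.choices[0].message)
```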

## Message Roles

| Role | Purpose |
|------|---------|
| `system` | Instructions and injected context |
| `user` | Input message or trigger payload |
| `assistant` | Model responses and tool-call envelopes |
| `tool` | Tool execution results fed back to the model |
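
A minimal sketch of how the four roles appear across one tool-calling round trip (the IDs, arguments, and contents are illustrative):

```python
messages = [
    {"role": "system", "content": "You are Didactyl."},
    {"role": "user", "content": "Post a hello note."},
    # The model replies with a tool-call envelope instead of text.
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "nostr_post",
                     "arguments": "{\"content\": \"hello\"}"},
    }]},
    # The tool result is fed back under the matching tool_call_id.
    {"role": "tool", "tool_call_id": "call_1", "content": "{\"ok\": true}"},
]
```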

## Context Assembly Model

Didactyl builds context by composing adopted skills in adoption-list order.

There are no context modes.

### Assembly Steps

  1. Load adopted skills from kind 10123.
  2. Resolve adopted skills in list order.
  3. Expand each skill's template variables via tools.
  4. Append resolved skill output to messages in that same order.
  5. Append live input (DM text or triggering event payload).
  6. Attach tool schemas.
  7. Apply execution parameters from trigger tags (if invoked via trigger).

```mermaid
flowchart TD
    INPUT[Input: DM or trigger event] --> ADOPT[Load adopted skills from kind 10123]
    ADOPT --> ORDER[Resolve skills in listed order]
    ORDER --> EXPAND[Expand template variables via tools]
    EXPAND --> MESSAGES[Append resolved skill messages]
    MESSAGES --> LIVE[Append live input message/event]
    LIVE --> TOOLS[Attach tool schemas]
    TOOLS --> PARAMS[Apply runtime params from trigger tags]
    PARAMS --> LLM[Send to LLM]
```
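
The same steps, as a minimal Python sketch. Every helper here (`load_adopted_skills`, `resolve_variables`, `tool_schemas`) is a hypothetical stand-in for Didactyl internals, not a real API.

```python
# Hypothetical stand-ins for Didactyl internals; none of these are real APIs.
def load_adopted_skills():
    """Skill templates from the kind 10123 adoption list, in list order."""
    return ["You are Didactyl. {{admin_profile}}", "Keep replies short."]

def resolve_variables(template):
    """Expand {{variables}} via tools; see the dedicated section below."""
    return template.replace("{{admin_profile}}", "<admin profile here>")

def tool_schemas():
    return [{"type": "function",
             "function": {"name": "nostr_post",
                          "description": "Publish a Nostr event",
                          "parameters": {"type": "object"}}}]

def build_context(live_input, overrides=None):
    """Assemble one request payload following steps 1-7 above."""
    messages = [{"role": "system", "content": resolve_variables(t)}
                for t in load_adopted_skills()]                # steps 1-4
    messages.append({"role": "user", "content": live_input})   # step 5
    payload = {"messages": messages, "tools": tool_schemas()}  # step 6
    payload.update(overrides or {})  # step 7: trigger-tag runtime overrides
    return payload
```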

### Why Order Matters

  • Earlier adopted skills usually establish broad behavior.
  • Later adopted skills can refine or narrow behavior.
  • If instructions conflict, standard prompt-order effects apply: later, more specific instructions tend to win.
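
For example, two adopted skills compose like this (contents are illustrative), with the later one narrowing the earlier:

```python
messages = [
    # Adopted first: establishes broad behavior.
    {"role": "system", "content": "You are a helpful Nostr assistant."},
    # Adopted later: refines and narrows that behavior.
    {"role": "system",
     "content": "Reply only to direct mentions, in under 280 characters."},
]
```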

## Context Parts

| Part | Source | Description |
|------|--------|-------------|
| Skill templates | Adopted skill events | Core instructions assembled in order |
| Resolved variables | Tool outputs | Runtime data inserted into templates |
| Conversation history | DM history/events | Recent dialogue context |
| Live input | DM or trigger event | Current request payload |
| Tool schemas | Tool registry | Capability declaration for tool calling |
| Runtime params | Trigger tags | LLM/tool limits for this execution |

## Template Variables Are Tool Calls

Template variables resolve through tool execution.

Example:

  • `{{admin_profile}}` resolves by running `nostr_admin_profile`
  • `{{admin_notes}}` resolves by running `nostr_admin_notes`

Unknown variables should resolve to empty values for portability.
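
A minimal sketch of that rule, assuming a hypothetical `TOOLS` registry and `run_tool` dispatcher (neither is a real Didactyl API):

```python
import re

def run_tool(name: str) -> str:
    """Stub for Didactyl's tool dispatcher; illustrative only."""
    return f"<output of {name}>"

# Variable name -> tool invocation; the mapping mirrors the examples above.
TOOLS = {
    "admin_profile": lambda: run_tool("nostr_admin_profile"),
    "admin_notes": lambda: run_tool("nostr_admin_notes"),
}

def resolve_variables(template: str) -> str:
    def expand(match: re.Match) -> str:
        tool = TOOLS.get(match.group(1))
        # Unknown variables resolve to empty values for portability.
        return tool() if tool else ""
    return re.sub(r"\{\{(\w+)\}\}", expand, template)

print(resolve_variables("Admin: {{admin_profile}} / {{unknown_var}}"))
```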


## Trigger Runtime Parameters

Execution controls are attached to trigger tags, not skill content:

  • `llm`
  • `max_tokens`
  • `temperature`
  • `seed`
  • `tools`
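
Since triggers are Nostr events, these controls plausibly ride along as ordinary tag pairs. The shape below is an assumption for illustration, not a confirmed Didactyl tag schema:

```python
# Assumed shape only: runtime controls as Nostr-style ["key", "value"] tags.
trigger_tags = [
    ["llm", "claude-opus-4.6"],
    ["max_tokens", "512"],
    ["temperature", "0.2"],
    ["seed", "42"],
    ["tools", "nostr_post"],
]
```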

Resolution order for a triggered run:

  1. Start with agent defaults
  2. Apply trigger tag overrides
  3. Execute
  4. Restore defaults
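
A minimal sketch of that sequence; `agent_params` and the tag shape are the same illustrative assumptions as above.

```python
# Hypothetical mutable defaults for the agent.
agent_params = {"llm": "claude-opus-4.6", "temperature": 0.7,
                "max_tokens": 512, "seed": None}

def run_triggered(trigger_tags, execute):
    """One triggered run: override, execute, then restore defaults."""
    defaults = dict(agent_params)             # 1. start with agent defaults
    agent_params.update(dict(trigger_tags))   # 2. apply trigger tag overrides
    try:
        return execute(agent_params)          # 3. execute with merged params
    finally:
        agent_params.clear()                  # 4. restore defaults afterwards
        agent_params.update(defaults)

run_triggered([["temperature", "0.2"]], lambda p: print(p["temperature"]))
```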

## Triggered vs Adopted Use

  • Adopted skill (kind 10123): contributes context/instructions
  • Triggered skill: contributes context and may supply execution overrides via its tags

This separation keeps composition simple while allowing per-trigger runtime control.


## Token Budget

Context cost is controlled by:

  • Adoption-list ordering and skill count
  • Conversation-history limits
  • Skill/template truncation limits
  • Per-trigger model/runtime parameter choices

Use the runtime context inspection endpoints to see the exact payload before each LLM call.
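
For a quick offline estimate, a rough chars/4 heuristic approximates token count; the divisor and budget below are common rules of thumb, not Didactyl constants.

```python
import json

def rough_token_estimate(payload: dict) -> int:
    """Very rough: ~4 characters per token for English-like text."""
    return len(json.dumps(payload)) // 4

payload = {"messages": [{"role": "user", "content": "hello"}]}
if rough_token_estimate(payload) > 8000:  # budget threshold is illustrative
    print("Context near budget; trim skills or history.")
```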


## Related Documentation

- [SKILLS.md](SKILLS.md)
- [TOOLS.md](TOOLS.md)
