Maple AI Proxy + Onyx (OpenCode) Setup Guide

How to get private AI inference on your local-first markdown notes synced via Nostr, using Onyx and MapleAI.

Overview

I’ve been using MapleAI for a while alongside my typical productivity workflows. I should preface this with a warning - I am not a developer and I am an amateur here with just enough knowledge to really break things.

I was pulling my physical notebooks off the shelf when I noticed Onyx hit the scene. I’ve started playing with it lightly to test moving some of my Obsidian markdown note-taking elsewhere.

What this guide provides:

Local-first markdown notes with Nostr-based encrypted sync, and AI inference running inside Trusted Execution Environments where even Maple can’t see your prompts or notes. No KYC required, no vendor lock-in on the data layer, and the AI traffic is encrypted end-to-end at the hardware level.

Note: I personally recommend using a fresh Nostr key pair for your Onyx vault, as this keeps the vault entirely separate from your daily profile and limits the odds of publishing content you don’t intend to. I’ve also noticed some minor issues and incompatibilities with certain models in MapleAI, so consider this an early test of the setup.

Sure, you can easily configure your OpenClaw to call the MapleAI API and get that running in Onyx - this is the easier approach for those that run a clanker-claw. This was mostly an experiment for me, as I noticed Onyx including OpenCode in its feature set.


What is Onyx?

Onyx is a private, encrypted note-taking app with Nostr sync.

Onyx lets you write markdown notes locally and sync them securely across devices using the Nostr protocol. Your notes are encrypted with your Nostr keys before being published to relays, ensuring only you can read them.

This is the foundation of what I’m building—a truly open, user-controlled productivity platform where your data belongs to you and syncs through decentralized infrastructure rather than corporate servers.

Read the full article here: https://primal.net/derekross/building-onyx-why-im-creating-an-open-source-alternative-to-big-techs-ai-productivity-tools

GitHub Repository: https://github.com/derekross/onyx

Now go follow and zap @3f770...45b24 for his awesome contributions.

What is MapleAI?

Private AI chat with end-to-end encryption. Your conversations stay yours.

At the heart of the app lies a robust security framework built on the principles of end-to-end encryption, encrypted sync, and encrypted AI. Conversations with AI remain confidential, even from Maple. They are encrypted on your device, using a personal encryption key unique to your account, before being sent to Maple’s servers. From that point, only the secure enclave can interact with your data.

The key is that you can pay with bitcoin and operate an entirely anonymous account.

Read the full article here: https://blog.trymaple.ai/redesigned-maple-ai-private-ai-chat/

MapleAI Proxy Documentation: https://blog.trymaple.ai/maple-proxy-documentation/

Now go follow and zap @7dc38...4066d and @8ea48...20a43 for this product and contribution.


Prerequisites

  1. Onyx installed (includes OpenCode at ~/.opencode/bin/opencode)

    Onyx embeds OpenCode as its AI backend. OpenCode supports any OpenAI-compatible provider via config. Maple Proxy exposes an OpenAI-compatible API locally. The chain: Maple Proxy → OpenCode config → Onyx.

  2. Maple AI desktop app installed with a paid plan (Pro, Team, or Max)

    The MapleAI desktop app can run a local proxy server and generate unique API keys. You’ll need both of these.


Setup Steps

My instructions are based on how I got this configured on macOS.

1. Start Maple Proxy

Open the Maple desktop app → Settings → API Management → Local Proxy tab → Start Proxy.

This runs on http://127.0.0.1:8080. Consider enabling “Auto-start proxy when app launches” so it’s easily available.
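Before wiring up OpenCode, it’s worth confirming the proxy is actually listening. A minimal check, assuming the proxy exposes the standard OpenAI-compatible /v1/models endpoint (adjust the path if Maple’s proxy differs):

```shell
# Probe the local Maple proxy (assumes the standard OpenAI-compatible
# /v1/models endpoint; the exact path is an assumption).
if curl -sf --max-time 2 http://127.0.0.1:8080/v1/models >/dev/null; then
  proxy_status="up"
else
  proxy_status="down"
fi
echo "maple proxy: $proxy_status"
```

If this reports the proxy is down, recheck the Local Proxy tab in the Maple desktop app before continuing.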


2. Create the OpenCode config file

Create ~/.config/opencode/opencode.json:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "enabled_providers": ["maple"],
  "provider": {
    "maple": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Maple AI",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "gpt-oss-120b": { "name": "GPT-OSS 120B" },
        "deepseek-r1-0528": { "name": "DeepSeek R1" },
        "llama-3.3-70b": { "name": "Llama 3.3 70B" },
        "kimi-k2.5": { "name": "Kimi K2.5" },
        "qwen3-vl-30b": { "name": "Qwen3 VL 30B" }
      }
    }
  }
}
```

Terminal shortcut:

```bash
cat > ~/.config/opencode/opencode.json << 'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "enabled_providers": ["maple"],
  "provider": {
    "maple": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Maple AI",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "gpt-oss-120b": { "name": "GPT-OSS 120B" },
        "deepseek-r1-0528": { "name": "DeepSeek R1" },
        "llama-3.3-70b": { "name": "Llama 3.3 70B" },
        "kimi-k2.5": { "name": "Kimi K2.5" },
        "qwen3-vl-30b": { "name": "Qwen3 VL 30B" }
      }
    }
  }
}
EOF
```

Key details:

  • enabled_providers: ["maple"] ensures only Maple models appear (no OpenAI/Zen clutter)

  • @ai-sdk/openai-compatible is the correct npm package for custom OpenAI-compatible endpoints

  • Do NOT use the built-in openai provider — it loads default OpenAI models and may route differently.
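A single missing comma in this file will silently break provider loading, so it’s worth parsing the config before restarting Onyx. A quick sketch using python3’s stdlib json module:

```shell
# Parse the OpenCode config to catch JSON syntax errors early.
config="$HOME/.config/opencode/opencode.json"
if [ -f "$config" ] && python3 -m json.tool "$config" >/dev/null 2>&1; then
  echo "opencode.json: valid JSON"
else
  echo "opencode.json: missing or invalid"
fi
```

Any `json.tool` parse error means OpenCode would likely fall back to its defaults, and the Maple models won’t appear.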


3. Register the API credential

Create/update ~/.local/share/opencode/auth.json:

```json
{
  "maple": {
    "type": "api",
    "key": "YOUR-MAPLE-API-KEY"
  }
}
```

Terminal shortcut:

```bash
cat > ~/.local/share/opencode/auth.json << 'EOF'
{
  "maple": {
    "type": "api",
    "key": "YOUR-MAPLE-API-KEY"
  }
}
EOF
```

Get your API key from Maple desktop app → Settings → API Management. If using the desktop app’s built-in proxy, the key is auto-managed and you can use any string.

I found more success when I had the Maple Proxy running, and used a unique API key here.
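To sanity-check that the credential registered, you can verify that auth.json parses and actually contains a "maple" entry. A sketch using python3’s stdlib json module:

```shell
# Check that auth.json parses and includes a "maple" credential entry.
auth="$HOME/.local/share/opencode/auth.json"
if python3 -c 'import json, sys; d = json.load(open(sys.argv[1])); sys.exit(0 if "maple" in d else 1)' "$auth" 2>/dev/null; then
  echo "maple credential found"
else
  echo "maple credential missing or auth.json unreadable"
fi
```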


4. Restart Onyx

This step is critical: OpenCode runs as an embedded server inside Onyx and caches its config on startup.

```bash
pkill -f opencode
```

Then fully quit Onyx (Cmd+Q) and relaunch it.
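Before relaunching, you can confirm no stale OpenCode server survived (pgrep exits non-zero when nothing matches):

```shell
# Verify no embedded OpenCode server is still running before relaunch.
if pgrep -f opencode >/dev/null 2>&1; then
  echo "opencode still running; try pkill -f opencode again"
else
  echo "no opencode processes found"
fi
```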


5. Select a Model and Test

In Onyx → Settings → OpenCode, the model dropdown should now show the Maple AI models. Select one (e.g., Llama 3.3 70B) and test with a message.
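You can also test the proxy directly from the terminal, bypassing Onyx entirely. A sketch assuming the standard OpenAI-compatible /v1/chat/completions endpoint; YOUR-MAPLE-API-KEY is a placeholder, and the model id must match one from your opencode.json:

```shell
# Send a test chat completion straight to the local proxy.
# "llama-3.3-70b" must match a model id from your config;
# YOUR-MAPLE-API-KEY is a placeholder for your real key.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR-MAPLE-API-KEY" \
  -d '{
    "model": "llama-3.3-70b",
    "messages": [{"role": "user", "content": "Say hello"}]
  }' || echo "proxy not reachable (is it started?)"
```

A JSON response with a `choices` array means the whole chain works outside Onyx, which helps narrow down whether any remaining problem is in the Onyx/OpenCode layer or the proxy itself.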

https://i.nostr.build/n19Fj8eumlwezCRb.png

https://i.nostr.build/DeMYE3LJ3ROd8fAp.png


Congrats. 🙌

You should now have a totally private AI assistant running alongside your local-first markdown vault synced over Nostr. Let me know if you try this out, or have any feedback.

Godspeed and stay sovereign.

“Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” - Edward Snowden

