Algorithms as Identities: Why NIP-85 Matters

NIP-85 makes each algorithm a Nostr identity. Users follow algorithm-keys, see their outputs, and switch freely.

Nostr escaped the platform trap. Your identity lives in your keypair, anchored in cryptography. Your notes live on relays you choose, free from servers controlled by people who want to manipulate what you see. One piece of the puzzle remains: algorithmic curation.

Even decentralized networks need ranking. A social protocol drowns in spam, bot networks, and context-free posts from strangers unless it can surface signal over noise. The question is who controls the ranking.

Two architectures have emerged to address this, each suited to different use cases. The first treats algorithms as services: you send a request, a server computes a result, you get an answer. This is the Data Vending Machine model, exemplified by projects like Vertex. It excels at real-time personalized queries where you need recommendations on demand.

The second architecture treats algorithms as Nostr citizens. This is NIP-85, Trusted Assertions. Algorithm providers publish all their scores as signed Nostr events, tens of thousands of them. Every pubkey gets a score, every score is visible, every change is auditable. Users follow an algorithm-key and subscribe to its outputs like any other Nostr feed.
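To make this concrete, here is a sketch of what one assertion and a client's subscription to it could look like. The kind number 30382 comes from the text above; the `d` and `rank` tag names are assumptions for illustration, so check the NIP-85 draft for the exact tag vocabulary.

```python
import json

# A sketch of a NIP-85 assertion: one score for one pubkey, published
# as an ordinary signed Nostr event by the algorithm's own key.
# The "d"/"rank" tag names are assumptions for illustration.
assertion = {
    "kind": 30382,
    "pubkey": "<algorithm pubkey>",   # the algorithm's identity
    "created_at": 1700000000,
    "tags": [
        ["d", "<target pubkey>"],     # one assertion per target pubkey
        ["rank", "87"],               # the published score
    ],
    "content": "",
    # "id" and "sig" omitted here: the event is signed like any other
}

# Following the algorithm-key is an ordinary relay subscription filter:
subscription_filter = {"kinds": [30382], "authors": ["<algorithm pubkey>"]}

print(json.dumps(subscription_filter))
```

Because the assertion is a plain event, any relay that stores it can answer queries for it; no bespoke API is needed.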

The difference sounds technical. Both models require trusting an operator who runs computation you cannot verify. But they serve different needs and offer different tradeoffs.

DVMs shine at discovery and real-time personalization. You send a request with your pubkey and a target pubkey. The service computes a score based on your specific position in the social graph and returns it immediately. This is valuable for recommendations, for results tailored to your network position, or for computation too expensive to pre-compute for every possible query.

NIP-85 serves a different purpose. The algorithm provider runs their calculations and publishes kind 30382 events to relays. Every pubkey gets a score, and those scores become persistent and queryable. The provider could still lie about what algorithm they’re using. They could still manipulate individual scores. But the outputs exist as Nostr events that anyone can examine, compare across time, and analyze for inconsistencies. You can run the same query against three different algorithm-keys and see how their outputs differ. You can notice if a specific pubkey’s score changed dramatically without obvious cause.
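The cross-provider comparison and change-detection described above can be sketched in a few lines. The numeric score representation here is an assumption for illustration; NIP-85 only requires that outputs be published as events.

```python
def divergence(scores_by_provider: dict) -> int:
    """Largest gap between any two providers' scores for one pubkey.
    A suspiciously large gap is a cue to inspect the outliers."""
    vals = list(scores_by_provider.values())
    return max(vals) - min(vals)

def dramatic_change(history: list, threshold: int) -> bool:
    """Flag a pubkey whose latest score jumped by more than `threshold`
    since the previous assertion, with no obvious cause."""
    if len(history) < 2:
        return False
    return abs(history[-1] - history[-2]) > threshold

# Compare one pubkey's score across three algorithm-keys:
print(divergence({"algo_a": 90, "algo_b": 85, "algo_c": 40}))  # → 50
```

Neither check proves an algorithm honest; they only make dishonesty more visible, which is the point of publishing the outputs at all.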

This creates accountability through observability, even though the fundamental trust requirement remains.

The philosophical insight that makes NIP-85 significant: each algorithm becomes an identity you can follow. The algorithm-key is just another npub. Its outputs are just another feed. Your trust relationship with that algorithm works the same way your trust relationship with a human account works: observable over time, comparable to alternatives, and reversible if its behavior changes in ways you find objectionable.

The tradeoffs are real. Vertex, which provides DVM-based web of trust services, has noted that pre-computed scores provide little help with discovery. When you know which pubkeys you’re looking for, you can query NIP-85 assertions. When you’re searching for new accounts, a real-time service can recommend pubkeys based on your social graph in ways a database of pre-computed scores cannot. Processing a hundred thousand assertion events client-side is also computationally expensive, especially on mobile devices.

These are real limitations. NIP-85 is better suited for verification than recommendation. If you already have a pubkey and want to know how trustworthy it is, querying pre-computed assertions works well. For finding new accounts to follow, a DVM makes more sense.

The computational burden is also a choice. Heavy clients can process assertion events locally. Lightweight clients can trust a relay to filter for them. The point is that the data exists in a form that permits verification. Even if most users never audit their algorithm’s outputs, the possibility of auditing changes the incentive structure for algorithm providers.

The case for NIP-85 is that it lowers the barrier to publishing algorithmic perspectives. With a DVM, you need infrastructure: a server running to handle requests, uptime, scaling. With NIP-85, you compute your scores, sign them, publish them to relays, and walk away. Anyone with a laptop and an opinion about trust can become an algorithm provider. The relay handles storage and queries.
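The "compute, publish, walk away" workflow is small enough to sketch directly. The tag names are assumptions for illustration, and `publish` below stands in for signing the event and sending it to relays.

```python
import time

def publish_scores(algorithm_pubkey: str, scores: dict, publish) -> int:
    """Emit one kind 30382 assertion per scored pubkey.
    `publish` stands in for 'sign and send to relays'; once the events
    are on relays, no server needs to stay online to answer queries."""
    for target, score in scores.items():
        publish({
            "kind": 30382,
            "pubkey": algorithm_pubkey,
            "created_at": int(time.time()),
            "tags": [["d", target], ["rank", str(score)]],  # assumed tags
            "content": "",
        })
    return len(scores)

outbox = []  # a list as a stand-in for a relay connection
publish_scores("algo_hex", {"alice_hex": 92, "bob_hex": 47}, outbox.append)
print(len(outbox))  # → 2
```

Contrast this with a DVM, where the same provider would also need request handling, uptime, and scaling.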

This creates a market for algorithms. When outputs are published, providers can be compared. A provider whose scores diverge suspiciously from competitors, or whose rankings correlate with factors unrelated to stated methodology, faces reputational risk. Users can switch to a different algorithm-key, and manipulation becomes harder to hide.

The architecture also enables interesting compositions. A user could follow multiple algorithm-keys and combine their outputs. They could weight recent assertions more heavily than old ones. They could filter for algorithms that other trusted accounts have also chosen to follow. The algorithm layer becomes subject to the same web of trust dynamics as the content layer.
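One way to sketch that composition: blend the scores of several algorithm-keys, weighted by how much the user trusts each key and by how fresh each assertion is. The half-life decay scheme is an illustration, not part of any NIP.

```python
import time

def combined_score(assertions, trust_weights, half_life_days=30.0, now=None):
    """Blend scores from several algorithm-keys into one number.

    assertions: list of (algo_pubkey, score, created_at) tuples.
    trust_weights: how much the user weights each algorithm-key.
    Older assertions decay with a half-life, so recent ones count more.
    """
    now = time.time() if now is None else now
    num = den = 0.0
    for algo, score, created_at in assertions:
        age_days = (now - created_at) / 86400.0
        recency = 0.5 ** (age_days / half_life_days)  # exponential decay
        w = trust_weights.get(algo, 0.0) * recency    # unknown keys ignored
        num += w * score
        den += w
    return num / den if den else None

now = 1_000_000
print(combined_score([("a", 10, now), ("b", 20, now)],
                     {"a": 1.0, "b": 1.0}, now=now))  # → 15.0
```

Because the inputs are ordinary events, the same weighting could incorporate which algorithm-keys a user's trusted contacts also follow.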

The protocol needs both approaches. DVMs handle real-time personalized recommendations and computationally intensive queries. NIP-85 handles verification, historical comparison, and use cases where observability matters. A client might use a DVM to discover new accounts to follow, then check those accounts against NIP-85 assertions from algorithm-keys it trusts. The two approaches compose naturally.

NIP-85 adds a useful tool to the kit. Algorithms can publish their outputs as protocol data, available for comparison and analysis, while DVMs continue to handle what they do best. Nostr gave users portable identity. Between DVMs and NIP-85, it’s building portable curation too.

