Analysis of Direct Relationships (Nostr via Relays) vs. Platform-Mediated Trust (Legacy Centralized Social Media)
- 1. The Architecture of (Un)Safety
- 2. A Framework for Analyzing (Un)Safe Systems
- 3. The Unsafe Model: Platform-Mediated Trust
- 4. The Safe Model: Nostr and Direct Relationships via Relays
- 5. Safe vs. Unsafe
- 6. Real-World Evidence and Implications for Safety
- 7. Choosing Safety Over Traps
- References
Drawing on evolutionary game theory, we demonstrate that Nostr’s architecture—particularly its explicit embrace of censorship at the node level to achieve network-level resilience—creates strategic equilibria radically different from those observed in centralized systems. We conclude that Nostr’s design aligns individual incentives with network resilience, offering a sustainable foundation for free expression, while centralized platforms structurally misalign incentives, creating inherently unsafe environments for users.
1. The Architecture of (Un)Safety
Social media, conceived as a tool for human connection, has evolved into something far more dangerous: a system of control over data, feeds, and digital identities that prioritizes corporate profit over user welfare. The trade-off appeared simple—“free” access in exchange for advertising exposure—but has produced systemic harms: privacy erosion, algorithmic manipulation, psychological exploitation, and the concentration of economic power in the hands of platform operators who answer to shareholders, not users.
At the heart of this dynamic lies a question of trust architecture. Who do users trust, and what are they trusting them to do? In centralized platforms, users are forced to delegate:
- Identity verification to the platform (via account credentials the platform can revoke)
- Content storage to the platform’s servers (which they can delete)
- Content discovery to the platform’s algorithms (designed to maximize engagement, not truth)
- Relationship maintenance to the platform’s social graph (which the platform owns, not the user)
- Censorship decisions to the platform’s moderation team (applied opaquely and inconsistently)
This delegation creates a principal-agent problem of enormous scale. Users (principals) must trust that platforms (agents) will act in their interests despite fundamentally misaligned incentives. Platforms profit from attention extraction, data collection, and content that maximizes engagement—often at the expense of user mental health, privacy, and access to reliable information. This is not a bug; it is the business model.
Nostr offers a fundamentally different architecture: direct relationships mediated by relays. Users control their own cryptographic identity (keys). They choose which relays to publish to and read from. Relays compete on features, policies, and price rather than monopolizing user attention. This paper analyzes how these architectural differences produce radically different safety outcomes.
2. A Framework for Analyzing (Un)Safe Systems
Game theory provides a mathematical language for analyzing strategic interactions where outcomes depend on the choices of multiple participants. In social media contexts, key players include:
- Users: Seeking connection, information, and expression—but often exploited instead
- Platforms/Relay Operators: Seeking revenue, sustainability, or ideological goals
- Advertisers: Seeking user attention, often at the expense of user welfare
- Adversaries: Seeking to disrupt, censor, manipulate, or extract data
Recent research has applied evolutionary game theory to analyze trust mechanisms in digital contexts. Zhang et al. (2025) constructed tripartite models examining interactions between platforms, influencers, and consumers, demonstrating that system stability depends critically on monitoring costs, reputation effects, and trust-based utility gains. Their findings reveal that properly calibrated incentive mechanisms can catalyze transitions toward high-trust equilibrium—or, when misaligned, trap users in unsafe, low-trust states.
Lan (2025) similarly applied game theory to rumor propagation on Weibo, identifying algorithmic recommendation mechanisms, users’ psychological characteristics, and response efficiency as pivotal factors. Both studies underscore that platform design shapes strategic behavior—and therefore safety outcomes—in predictable ways.
3. The Unsafe Model: Platform-Mediated Trust
3.1 The Principal-Agent Problem as Safety Failure
In centralized social media, users enter an implicit contract: the platform provides services in exchange for data and attention. However, the platform’s incentives diverge so sharply from users’ interests that the relationship becomes structurally unsafe.
Platform objectives:
- Maximize user time-on-site (engagement) through any means necessary
- Collect granular behavioral data for surveillance advertising
- Optimize ad inventory monetization, regardless of content quality
- Minimize moderation costs, leading to under-enforcement or outsourced AI errors
- Avoid regulatory liability while maximizing extractive practices
User objectives:
- Authentic connection and communication
- Access to trustworthy information
- Privacy and autonomy
- Freedom from manipulation and exploitation
- Mental and emotional safety
These objectives are not merely misaligned—they are fundamentally opposed. Engagement optimization favors emotionally charged, often misleading content because outrage retains attention. Data collection requires surveillance that would be illegal if conducted by governments. Moderation cost minimization leads to either under-enforcement against harassment (unsafe for marginalized users) or over-enforcement via opaque algorithmic filtering (unsafe for all users).
3.2 Strategic Equilibria in Unsafe Systems
Applying evolutionary game theory, we can model platform-user interactions as a repeated game with two strategy sets:
Platform strategies:
- High-investment: Robust moderation, transparent algorithms, privacy protection (lower profit)
- Low-investment: Minimal moderation, opaque algorithms, aggressive data collection (higher profit)
User strategies:
- Engage: Participate actively, share content, build relationships (exposes user to harm)
- Exit: Reduce usage, migrate to alternatives (costly due to network lock-in)
The payoff structure overwhelmingly favors low-investment platform strategies because:
- Engagement metrics respond more strongly to algorithmic manipulation than to user welfare
- Privacy protections provide no direct revenue and may reduce targeting efficiency
- Moderation costs are immediate, while reputational damage is delayed and often absorbed
- Users have no credible exit threat due to network effects
This creates what Zhang et al. (2025) identify as suboptimal, unsafe equilibria where platforms adopt minimal monitoring and users become trapped in degraded, harmful experiences.
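To make the trap concrete, the following minimal Python sketch encodes the two strategy sets above. The payoff numbers are illustrative assumptions chosen to reflect the dynamics described in the text, not measurements; the point is that low investment dominates once switching costs are high.

```python
# Illustrative 2x2 game: platform (rows) vs. user (columns).
# All payoff numbers are assumptions chosen to reflect the text:
# low-investment pays the platform more, and lock-in makes Exit
# costly enough that users keep engaging even when harmed.

# payoffs[platform_strategy][user_strategy] = (platform_payoff, user_payoff)
payoffs = {
    "high_investment": {"engage": (3, 4), "exit": (0, 1)},
    "low_investment":  {"engage": (5, -2), "exit": (1, 1)},
}

def platform_best_response(user_strategy: str) -> str:
    """Platform picks whichever investment level pays more."""
    return max(payoffs, key=lambda s: payoffs[s][user_strategy][0])

def user_best_response(platform_strategy: str, switching_cost: float) -> str:
    """User exits only if leaving beats staying after switching costs."""
    stay = payoffs[platform_strategy]["engage"][1]
    leave = payoffs[platform_strategy]["exit"][1] - switching_cost
    return "engage" if stay >= leave else "exit"

# With heavy lock-in (switching_cost = 5), the user keeps engaging even
# under low investment, so low investment is the platform's best response:
print(platform_best_response("engage"))          # -> low_investment
print(user_best_response("low_investment", 5))   # -> engage
```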
3.3 The Lock-In Problem: Why Users Can’t Leave
Centralized platforms benefit from powerful network effects: users join because other users are there. This creates switching costs that trap users even when platform behavior becomes actively harmful.
The DSNP blog’s analysis of trust models describes this as “a flaw that the system contains a network effect lock-in. The lock-in creates a situation where the Centralized Trust is lost, but due to Social and Community Trust, the system remains in a degraded state.” Users remain in unsafe environments not because they choose to, but because leaving means losing connection to their communities.
This lock-in fundamentally alters the game. Users cannot credibly threaten exit because the cost of leaving exceeds the benefit of staying, even in degraded, harmful conditions. Platforms, knowing this, can extract increasing rents (attention, data, mental health) without fear of discipline. The system is designed to be a trap.
4. The Safe Model: Nostr and Direct Relationships via Relays
4.1 Architectural First Principles of Safety
Nostr’s design inverts the centralized model entirely. Rather than a single unsafe platform mediating all interactions, Nostr consists of:
- Clients: Applications that users interact with, acting as intelligent agents on their behalf
- Relays: Servers that store and forward events, competing for users
- Keys: Cryptographic identities controlled entirely and exclusively by users—the foundation of safety
As nostr.com explains: “In Nostr, every user is represented by a secret number called a ‘key’ and every message carries a digital ‘signature’ that proves its authorship and authenticity without the need for any authority to say so. This foundation of trust enables the decentralized broadcasting of information.”
Critically, relays have no obligation to store data permanently or serve all users. As the protocol documentation states: “Each relay is independent of each other and there is no global pool of content, by definition… no relay has any obligations to please all the peoples of the Earth and is free to impose any limits or policies it wants.”
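The signature-based trust described above is straightforward to sketch. The following minimal Python example computes an event id per NIP-01 (the SHA-256 of a canonical JSON serialization of the event fields); the public key shown is a placeholder, and actual signing would require a secp256k1 Schnorr library, which is omitted here.

```python
import hashlib
import json
import time

def event_id(pubkey: str, created_at: int, kind: int, tags: list, content: str) -> str:
    """Compute a Nostr event id per NIP-01: the SHA-256 of the canonical
    JSON serialization [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no whitespace, per NIP-01
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# A hypothetical 32-byte public key in hex (not a real identity):
pubkey = "ab" * 32
note = {
    "pubkey": pubkey,
    "created_at": int(time.time()),
    "kind": 1,            # kind 1 = short text note
    "tags": [],
    "content": "hello, nostr",
}
note["id"] = event_id(**note)
# "sig" would be a BIP-340 Schnorr signature over the id, made with the
# user's secret key (requires a secp256k1 library; omitted here).
print(note["id"])
```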
4.2 Embracing Censorship at the Node Level to Achieve Network-Level Freedom
Nostr’s approach to censorship resistance contains a profound and often misunderstood insight: the network resists censorship precisely because individual nodes are allowed to censor. The nostr.com site directly addresses this under the heading “Pro-censorship”:
“Nostr doesn’t subscribe to political ideals of ‘free speech’ — it simply recognizes that different people have different morals and preferences and each server, being privately owned, can follow their own criteria for rejecting content as they please and users are free to choose what to read and from where.”
This is not a bug or a compromise—it is an exquisite game-theoretic design. When relay operators face legal pressure or censorship demands, they can simply comply (delete specific content or block certain users), thereby ensuring their own server’s survival. However, because the user’s client broadcasts the same “event” to multiple relays simultaneously—relays that may be distributed worldwide and subject to different jurisdictions—the information itself survives in the network. Even if one, ten, or a hundred relays delete a piece of information, it remains accessible as long as a single relay retains a copy.
This design creates a radically different incentive structure for relay operators. Rather than being forced to fight every censorship battle (which would be unsafe for them), they can comply with local demands while the overall network maintains resilience through diversity. The safety of individual operators enables the freedom of the network.
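A minimal sketch of this broadcast behavior, assuming hypothetical relay URLs and the third-party `websockets` library, shows why node-level deletion does not translate into network-level erasure:

```python
import asyncio
import json

import websockets  # third-party: pip install websockets

# Hypothetical relay URLs spread across jurisdictions; a real client
# would use the user's own relay list.
RELAYS = [
    "wss://relay.example-a.com",
    "wss://relay.example-b.net",
    "wss://relay.example-c.org",
]

async def publish(event: dict, relay_url: str) -> bool:
    """Try to publish one signed event to one relay; failure is tolerated."""
    try:
        async with websockets.connect(relay_url) as ws:
            await ws.send(json.dumps(["EVENT", event]))  # NIP-01 wire format
            return True
    except Exception:
        return False  # this relay censored, failed, or timed out

async def broadcast(event: dict) -> int:
    """Publish to every relay; the event survives if any one accepts it."""
    results = await asyncio.gather(*(publish(event, r) for r in RELAYS))
    return sum(results)

# accepted = asyncio.run(broadcast(signed_event))
# Even if most relays reject the event, one acceptance keeps it alive.
```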
4.3 Game Theory of Relays: Competition Creates Safety
Consider the strategic position of a relay operator:
Relay objectives:
- Maintain operational viability (avoid legal shutdown)
- Attract users (through policies, features, or pricing)
- Minimize costs (bandwidth, storage, moderation)
Relay strategies:
- Permissive: Accept all content, minimal moderation
- Restrictive: Enforce strict content policies
- Paid: Charge for access or posting
- Free: Operate on donations or idealism
Unlike platforms, relays face genuine competition. Users can choose which relays to publish to and read from. A relay that becomes too restrictive loses users to alternatives. A relay that becomes too permissive may attract spam or legal attention. A relay that becomes unsafe in any dimension can be abandoned.
This competition creates evolutionary pressure toward diversity rather than uniformity. As the Nostr documentation notes: “If users can go to whatever relay they want we’ll see relays ran by all sorts of people and entities. Running servers is very cheap, and a relay can run on a $5/mo server and house at least a few thousand users. It’s not hard to imagine relays ran by communities, individuals who just want to be useful to others, big organizations wanting to gain good will with some parts of the public, but also companies, client makers, and, of course, dedicated entities who sell relay hosting for very cheap.”
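The policy diversity the quote describes is easy to express in code. The sketch below is illustrative, not any relay’s actual implementation; every rule shown is an assumption an operator might or might not adopt.

```python
from dataclasses import dataclass, field

@dataclass
class RelayPolicy:
    """One operator's house rules; other relays will choose differently."""
    blocked_pubkeys: set = field(default_factory=set)
    allowed_kinds: set = field(default_factory=lambda: {0, 1, 3, 7})
    max_content_bytes: int = 64_000
    require_payment: bool = False

    def accepts(self, event: dict, sender_paid: bool = False) -> bool:
        if event["pubkey"] in self.blocked_pubkeys:
            return False                      # local censorship is allowed
        if event["kind"] not in self.allowed_kinds:
            return False
        if len(event["content"].encode()) > self.max_content_bytes:
            return False
        if self.require_payment and not sender_paid:
            return False
        return True

# Two relays, two philosophies; the network's knowledge is their union.
permissive = RelayPolicy()
restrictive = RelayPolicy(blocked_pubkeys={"ab" * 32}, require_payment=True)
```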
4.4 Sats as Anti-Spam and Safety Mechanism
Nostr leverages Bitcoin micropayments (sats) to align incentives and enhance safety:
“Imagine you run a relay that’s not censoring speech a powerful government want to suppress. You’re probably a small relay. It would be easy to shut you down with a DDOS… Now imagine you just make every request cost 1000 sats. For an attacker, they can pay the relay runner, or not attack. No other option. Legitimate requests won’t mind paying a few sats and the info would be available everywhere and the attacker cannot afford to shut it down and if they do they make you rich so you can just spin up even more instances.”
This transforms the attack economics entirely. A DDoS attack, rather than destroying the target, becomes a revenue source. The attacker faces an impossible choice: pay their adversary, or fail. The relay operator’s safety is economically guaranteed.
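A back-of-envelope calculation, using the 1000-sat figure from the quote and an assumed flood rate, makes the reversal vivid:

```python
# Back-of-envelope attack economics under pay-per-request, using the
# 1000-sat figure from the quote. The attack volume is an assumption.

SATS_PER_REQUEST = 1_000
attack_requests_per_second = 10_000          # assumed flood rate
attack_duration_seconds = 3_600              # a one-hour DDoS attempt

attacker_cost_sats = (SATS_PER_REQUEST
                      * attack_requests_per_second
                      * attack_duration_seconds)

# Every sat the attacker spends lands in the relay operator's pocket,
# funding replacement capacity instead of destroying it.
print(f"attacker pays the relay {attacker_cost_sats:,} sats "
      f"(= {attacker_cost_sats / 100_000_000:,.0f} BTC)")
# -> attacker pays the relay 36,000,000,000 sats (= 360 BTC)
```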
More broadly, the documentation answers its own question—“Are economic incentives aligned to keep relays operational?”—in the affirmative, pointing to the low cost of running a relay and the diversity of motivations for doing so, as quoted in Section 4.3 above.
4.5 Clients as User Agents for Safety
Clients are not passive consumers but intelligent agents working for users:
“Clients act as agents for the users who install them. They decide which relays to connect to and when and what data to request according to the circumstance and user preferences.”
This architectural choice is fundamental to safety. Rather than having an algorithm controlled by the platform decide what users see, clients can implement any filtering, discovery, or safety mechanism their developers choose—or that users configure. On both the publisher side and the follower side, “clients behave smartly, keeping a local state and reacting to new information in order to ensure that the flow of information continues.”
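Concretely, under NIP-01 the client composes the subscription filter itself. The sketch below (with hypothetical pubkeys) shows a client requesting only followed authors and then ranking results locally:

```python
import json

# A client acting as the user's agent: it requests only what the user
# asked for. Per NIP-01, a subscription is a ["REQ", sub_id, filter]
# message whose filter the *client* composes — not a server-side feed.

follows = ["ab" * 32, "cd" * 32]   # hypothetical followed pubkeys

subscription = ["REQ", "home-feed", {
    "authors": follows,   # only people the user follows
    "kinds": [1],         # only short text notes
    "limit": 50,
}]

# The client sends this to each chosen relay over a websocket:
wire_message = json.dumps(subscription)

# Any further ranking, muting, or discovery logic runs locally,
# under the user's control rather than the platform's.
def local_rank(events: list[dict]) -> list[dict]:
    """Trivial local 'algorithm': newest first. Swap in anything."""
    return sorted(events, key=lambda e: e["created_at"], reverse=True)
```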
5. Safe vs. Unsafe
5.1 Concentration vs. Distribution
The DSNP blog’s taxonomy of trust models provides a useful framework for comparing safety outcomes:
| Trust Model | Unsafe Centralized Platforms | Safe Nostr |
|---|---|---|
| Centralized Trust | Primary mechanism: users must trust platform with everything | Minimal: users trust only their chosen relays, and can verify |
| Zero Trust | Not applicable—platform demands trust | Underlying cryptographic verification means no trust required |
| Economic Trust | Platform monetizes user attention against their interests | Optional: relay fees, zap payments align incentives |
| Community Trust | Secondary, constrained and manipulated by platform | Primary: relay communities form around shared values |
| Relationship Trust | Mediated and surveilled by platform | Direct between users, cryptographically verified |
| Transitive Trust | Algorithmically determined opaquely | User-driven via Web of Trust |
| Reputational Trust | Platform-controlled, manipulable metrics | User-controlled signals |
Centralized platforms concentrate all trust in a single entity that is structurally incentivized to betray it. Nostr distributes trust across multiple mechanisms, with no single point of failure—technical, economic, or social.
5.2 Evolutionary Stable Strategies: Traps vs. Diversity
Applying evolutionary game theory, we can identify conditions for stable equilibria in each system.
Unsafe centralized platform equilibrium conditions:
- High switching costs trapping users
- Strong network effects protecting incumbent regardless of behavior
- Low regulatory intervention (or regulatory capture)
- Opaque algorithms preventing user optimization
- Result: Users remain trapped in unsafe environments
Safe Nostr relay equilibrium conditions:
- Low barriers to entry for new relays
- User ability to discover and switch relays
- Diverse jurisdictional footprint
- Multiple monetization models (fees, donations, services)
- Result: Polymorphic equilibrium—multiple relay types coexisting, each serving different user segments safely, as the sketch below illustrates. No single relay can capture or endanger the entire network.
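A minimal replicator-dynamics sketch illustrates this polymorphic outcome. The payoff numbers are assumptions chosen so that each relay type earns less as its own niche crowds; the point is the shape of the equilibrium, not the specific values.

```python
# Replicator dynamics over three relay types. BASE payoffs and the
# CROWDING penalty are illustrative assumptions: each type earns less
# as its own niche fills up, so no single type takes over the network.

BASE = {"permissive": 1.00, "restrictive": 1.05, "paid": 0.95}
CROWDING = 0.8   # assumed penalty for competing in a crowded niche

def step(shares: dict, dt: float = 0.1) -> dict:
    """One replicator step: types with above-average fitness grow."""
    fitness = {t: BASE[t] - CROWDING * shares[t] for t in shares}
    mean = sum(shares[t] * fitness[t] for t in shares)
    return {t: shares[t] + dt * shares[t] * (fitness[t] - mean) for t in shares}

# Start from a lopsided network and let selection act:
shares = {"permissive": 0.70, "restrictive": 0.20, "paid": 0.10}
for _ in range(2000):
    shares = step(shares)

print({t: round(s, 2) for t, s in shares.items()})
# -> roughly {'permissive': 0.33, 'restrictive': 0.4, 'paid': 0.27}
# All three types persist at interior shares: a polymorphic equilibrium.
```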
5.3 Asymmetric Costs
Consider how adversaries fare in each system:
Censorship or attack attempt on centralized platform:
- Target: single company with single legal identity
- Method: legal pressure, technical attack, regulatory threat
- Outcome: if successful, content removed globally; users silenced entirely
- Adversary cost: moderate; one action affects all users
- User safety: Completely dependent on platform’s willingness to resist
Censorship or attack attempt on Nostr:
- Target: distributed relays across multiple jurisdictions
- Method: must attack each relay individually
- Outcome: content survives on any remaining relay; users continue publishing
- Adversary cost: must suppress all relays globally to silence content
- User safety: Guaranteed by diversity; no single point of failure
The asymmetry is stark. As the analysis shows, “The system’s resilience comes from the diversity of the network, not the uniformity of its members. The system’s total knowledge is the union of data held by all relays, not the intersection.”
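The arithmetic behind this asymmetry is simple. Assuming (idealistically) that relays fall independently with the same per-relay takedown probability, survival odds compound quickly:

```python
# How takedown difficulty scales with relay diversity. The per-relay
# takedown probability (0.9) is an assumption; independence is idealized.

def survival_probability(num_relays: int, p_takedown: float) -> float:
    """P(content survives) = 1 - p^n if each relay falls independently."""
    return 1 - p_takedown ** num_relays

for n in (1, 5, 20):
    print(n, round(survival_probability(n, 0.9), 4))
# 1 0.1    -> a single platform: censor once and the content is gone
# 5 0.4095 -> a handful of relays already changes the odds
# 20 0.8784 -> at twenty relays, suppression is nearly hopeless
```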
5.4 The “Pro-Censorship” Design: Explained
“The protocol is owner-less, relays are not.” As quoted in Section 4.2, Nostr recognizes that each privately owned server may reject content by its own criteria, while users remain free to choose what to read and from where.
The protocol documentation frames this as “freedom of association,” with the understanding that “when the network effect is not tied to a single organization a group of users cannot harm others.”
The profound insight: by allowing every relay to censor according to its own values, Nostr enables:
- Voluntary communities with shared standards, rather than imposed moderation
- Competition among moderation philosophies—users choose the rules they prefer
- Resilience—no single censorship decision affects the whole network
- Pluralism—different communities can coexist with different standards
This is the opposite of platform censorship, where one entity’s decisions bind all users. Nostr’s “pro-censorship” stance at the node level is actually pro-freedom at the network level.
6. Real-World Evidence and Implications for Safety
6.1 User Preferences Reveal Platform Unsafety
A recent poll by Ice Open Network found that among nearly 2,900 respondents:
- 44% cited privacy and security as their biggest concern about centralized platforms
- 22% pointed to ads and data exploitation
- 20% were most worried about censorship and algorithmic control
These concerns reflect precisely the game-theoretic vulnerabilities identified above. Users recognize—often intuitively—that platform incentives diverge from their interests and that they are in unsafe environments.
6.2 Regulatory Responses Address Symptoms, Not Causes
Governments are responding with legislation like the proposed American Privacy Rights Act and renewed enforcement of the Video Privacy Protection Act. However, regulation addresses symptoms rather than structural incentives. As long as platforms control user relationships and are shielded by network-effect lock-in, they will continue to find ways to extract value from that control. Regulation may slightly modify the unsafe equilibrium but cannot transform it into a safe one.
6.3 The Defederation Mechanism as Safety Tool
Carnegie Endowment research on decentralized platforms highlights defederation as a novel governance mechanism: “Defederation functions both as a decision individual servers can make for themselves and as a form of collective action.” This enables meta-governance: communities can choose their moderation standards and enforce boundaries through technical means rather than relying on platform benevolence.
6.4 Addressing Common Concerns About Nostr
On spam and unwanted content: “In the default feed you never see any spam, because clients will only fetch information from people that you follow. In that sense no one can ‘push’ spam into you.”
On harassment: “Harassment is similar to spam… individuals can just be blocked by their target and their content will vanish. Presumably friends of such target will also block, and creative solutions involving shared blocklists can be created.”
On content discovery: “It’s not true that Nostr doesn’t have algorithms. Nostr can have algorithms of all kinds: manual, automatic, AI-powered or rule-based. Some of these algorithms can be run entirely locally on clients.”
On search: “It’s surprisingly doable for clients to store all the posts from people you follow… then provide local search over that. That kind of search will be sufficient for most of the cases you would reach out for a search bar in a centralized platform.”
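A sketch of such client-side search, assuming notes have already been fetched into a local store, fits in a few lines:

```python
from dataclasses import dataclass

# The client keeps every note from followed authors in a small local
# store and searches it directly — no centralized index required.

@dataclass
class LocalStore:
    events: list  # notes already fetched from the user's relays

    def search(self, query: str) -> list:
        """Case-insensitive substring search over locally stored notes."""
        q = query.lower()
        return [e for e in self.events if q in e["content"].lower()]

store = LocalStore(events=[
    {"pubkey": "ab" * 32, "content": "Relays compete on policy"},
    {"pubkey": "cd" * 32, "content": "Zaps align incentives"},
])
print(store.search("relay"))   # finds the first note, entirely offline
```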
7. Choosing Safety Over Traps
The comparison between unsafe centralized platforms and safe Nostr reveals fundamental differences in incentive structures:
Centralized platforms concentrate power, create lock-in, and align operator incentives against user interests. Users remain trapped in suboptimal, unsafe equilibria because exit costs exceed the benefits of leaving—by design. These systems are not merely flawed; they are structurally unsafe, and their business models depend on that unsafety.
Nostr distributes power, enables competition, and aligns individual operator incentives with network resilience. The paradoxical embrace of censorship at the node level produces censorship resistance at the network level. Economic incentives through relay fees and Bitcoin micropayments transform attack economics into defense mechanisms. Users are not trapped but empowered—they can choose, switch, and verify.
As the protocol documentation puts it: “The fact that Nostr relies on an open ecosystem of privately-owned relay servers is the only thing that ensures a proper incentive structure and gives it a chance of working.” And as Libretech describes it: “Nostr is an inclusive communication commons. A simple standard that defines a scalable architecture of clients and servers that can be used to spread information freely. Not controlled by any corporation or government, anyone can build on Nostr and anyone can use it.”
For journalists, dissidents, and ordinary users seeking authentic communication free from manipulation and exploitation, the game-theoretic analysis is clear: direct relationships mediated by cryptographic verification and competitive relays offer a fundamentally safer foundation than platform-mediated trust.
The question is not whether centralized platforms will continue to harm users—their incentive structures guarantee it. The question is whether enough users will recognize the game they’re trapped in and choose a different one.
References
- Lai, S., Roth, Y., DiResta, R., Klonick, K., Knodel, M., Prodromou, E., & Rodericks, A. (2025). New Paradigms in Trust and Safety: Navigating Defederation on Decentralized Social Media Platforms. Carnegie Endowment for International Peace. Available at: https://carnegieendowment.org
- Lan, H. (2025). Research on the User Behaviour Game Analysis of Social Network Rumour Propagation Based on the Weibo. Proceedings of the 2nd International Conference on Innovations in Applied Mathematics, Physics, and Astronomy. SciTePress. Available at: https://www.scitepress.org
- Nostr.com. (2025). An open social protocol with a chance of working. Available at: https://nostr.com
- Wade, W. (2024). Models of Trust in the DSNP Ecosystem. DSNP.org Blog. Available at: https://dsnp.org/blog
- Warislohner, F. (2025). Centralized vs. Decentralized: The Race to Redefine Social Media. Ice Open Network. Available at: https://ice.io/blog
- Zhang, Z., Li, Z., Wang, T., & Guo, K. (2025). Evolutionary analysis of platform–influencer–consumer interactions in livestreaming commerce. Finance Research Letters, 81. Available at: https://www.sciencedirect.com/journal/finance-research-letters