I Invented NAT Hole Punching Before It Had a Name
How My 2002 Paper Preceded—and Likely Influenced—RFC 3489 and the Technology Behind Skype and WebRTC
By John L. Sokol
In December 2006, a Slashdot article titled "How Skype Punches Holes in Firewalls" made the front page. The article explained a clever technique that allowed peer-to-peer communication between two computers, even when both were behind firewalls or NAT devices. The technique was presented as if it were novel—a secret sauce that made Skype work where other applications failed.
I read that article with a mix of recognition and frustration. The technique they were describing was identical to what I had documented four years earlier, in June 2002, and had implemented in production video streaming systems as far back as 1997. I left a comment on Slashdot that day, linking to my paper. That comment is still there, archived for posterity.
This is the story of how I came to develop NAT traversal techniques years before they were standardized, and why I believe RFC 3489—the STUN protocol that underlies WebRTC and modern peer-to-peer communication—was derived from ideas I published first.
The Video Streaming Years: 1995-2001
To understand how I came to solve the NAT traversal problem, you need to understand what I was building in the mid-1990s. I wasn't an academic researcher writing theoretical papers. I was an engineer trying to make live video work over the early Internet—and making money doing it.
Starting in 1995, I ran the Livecam Video Streaming Project. We built one of the first commercial live video streaming platforms, serving approximately 2,500 adult websites with full-motion video to over 30,000 simultaneous viewers. This was 1997—years before YouTube, before Flash video, before broadband was common. We were pushing the boundaries of what the Internet could do.
The technical challenges were immense. Bandwidth was expensive—around $50,000 per month. We built custom servers on 90 MHz Pentiums running FreeBSD, developed our own JPEG codec using Zoran hardware compressors, and created a global content distribution network spanning 20 locations. I collaborated with Xing Streamworks on their first commercial streaming video product.
But perhaps the most persistent problem we faced was NAT. Our viewers were increasingly behind home routers and corporate firewalls. The NAT devices that let multiple computers share a single IP address were proliferating rapidly, and they broke everything we were trying to do. If you wanted to receive a video stream, the packets had to reach you—but NAT devices, by design, blocked unsolicited incoming traffic.
So we figured out how to punch through them.
The Technique: What I Discovered
The solution I developed was elegant in its simplicity. Here's the core insight: when a computer behind a NAT sends a packet to an external server, the NAT creates a "mapping"—it remembers which internal address and port sent the packet, and which external address and port it assigned. For a brief window of time, the NAT will allow return traffic from that external destination back through to the internal computer.
The trick is to exploit this behavior with a third-party coordination server. Here's how it works:
Step 1: Computer A (behind a firewall) connects to Server C on the public Internet. This creates a mapping in A's NAT.
Step 2: Computer B (also behind a firewall) connects to the same Server C. This creates a mapping in B's NAT.
Step 3: Server C now knows the external (mapped) IP addresses and ports for both A and B. It sends this information to each party.
Step 4: A sends packets to B's mapped address, and B sends packets to A's mapped address. Because both NATs have existing outbound mappings, and because the packets appear to be "replies" to those mappings, they pass through.
Step 5: Server C is no longer needed. A and B communicate directly, peer-to-peer.
I documented this technique in a paper dated June 3, 2002, published at ecip.com/fwdoc.htm. The title was simple: "Method of passing bi-directional data between two firewalls." In that document, I noted that I had "tested this on FreeBSD's NAT and Linux's IP Masquerading in the previous century"—meaning before the year 2000.
RFC 3489: Nine Months Later
In March 2003—nine months after I published my paper—the IETF released RFC 3489, titled "STUN - Simple Traversal of User Datagram Protocol (UDP) Through Network Address Translators (NATs)." This became the foundational standard for NAT traversal, later updated as RFC 5389, and forms the basis for WebRTC's ICE (Interactive Connectivity Establishment) protocol.
When I read RFC 3489, the similarities were impossible to ignore. The core mechanism was identical to what I had described. But the RFC went further—it formalized the technique into a complete protocol specification with message formats, NAT classification systems, and security considerations.
Let me be clear about what I'm claiming and what I'm not. I'm not claiming that I invented every aspect of STUN. The RFC authors—Jonathan Rosenberg, Joel Weinberger, Christian Huitema, and Rohan Mahy—did significant work to create a complete, implementable standard. They added NAT type classification, formal message encoding, security mechanisms, and operational procedures.
What I am claiming is that the fundamental insight—the core technique that makes the whole thing work—was something I had already discovered, implemented, and published before the RFC existed.
Side-by-Side Comparison
The language and concepts in my 2002 paper and RFC 3489 are remarkably parallel.
The Evidence Trail
The Internet Archive's Wayback Machine provides independent, third-party timestamped evidence of when my document existed online. The archive shows 35 captures of my paper at ecip.com/fwdoc.htm, with the first capture dated May 8, 2004. While this is after the RFC was published, it confirms the document existed online and was being crawled by that date.
More importantly, the document itself is dated June 3, 2002, and contains the phrase "in the previous century"—referring to my testing on FreeBSD and Linux NAT implementations before the year 2000. This internal evidence demonstrates the technique was not only documented but implemented years before the RFC.
My Slashdot comment from December 15, 2006 is also archived, showing that I publicly claimed priority on this technique immediately when it became news. I wasn't retroactively constructing a narrative—I was pointing to documentation that already existed.
Timeline
1997: NAT traversal implemented in my production video streaming systems; tested on FreeBSD's NAT and Linux's IP Masquerading before the year 2000.
June 3, 2002: "Method of passing bi-directional data between two firewalls" published at ecip.com/fwdoc.htm.
March 2003: IETF publishes RFC 3489 (STUN).
May 8, 2004: First Wayback Machine capture of my paper.
December 15, 2006: Slashdot covers Skype's hole punching; I comment with a link to my paper.
What the RFC Added
To be fair to the RFC authors, they did substantial work beyond what I documented. RFC 3489 includes:
NAT Type Classification: A formal taxonomy of NAT behaviors (Full Cone, Restricted Cone, Port Restricted Cone, Symmetric) with detection algorithms to determine which type you're behind.
Wire Protocol Specification: Complete message formats, TLV-encoded attributes, transaction IDs, port assignments, and retransmission timers.
Security Mechanisms: Shared secret exchange over TLS, message integrity via HMAC-SHA1, and detailed attack analysis.
Binding Lifetime Discovery: Procedures for determining how long NAT bindings persist, crucial for keepalive intervals.
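The classification the RFC added can be condensed into a decision tree. This is a hedged sketch of the logic in RFC 3489 section 10.1; the shape of the `results` object is my own assumption, not part of the RFC's wire format.

```javascript
// results = {
//   test1: mapped address from a plain Binding Request (or null),
//   mappedEqualsLocal: did test1's mapped address match our local address,
//   test2: got a response when the server replied from a new IP AND port,
//   test1b: mapped address reported by a second server address (or null),
//   test3: got a response when the server replied from a new port only
// }
function classifyNat(results) {
  if (!results.test1) return 'UDP blocked';
  if (results.mappedEqualsLocal) {
    // No translation observed: either open, or a UDP-filtering firewall
    return results.test2 ? 'Open Internet' : 'Symmetric UDP firewall';
  }
  if (results.test2) return 'Full Cone NAT';
  if (results.test1b && results.test1b !== results.test1) {
    return 'Symmetric NAT'; // mapping depends on the destination
  }
  return results.test3 ? 'Restricted Cone NAT' : 'Port Restricted Cone NAT';
}
```

Symmetric NAT is the case that defeats simple hole punching, which is why the classification matters operationally.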
These are real contributions. The RFC authors turned a concept into a deployable standard. But the concept itself—the fundamental insight that makes everything else possible—was not new. It was documented in my paper nine months earlier, and implemented in my systems years before that.
Why This Matters
You might ask why I care about establishing priority for a technique that's now over two decades old. There are several reasons.
First, there's simple historical accuracy. The story of how NAT traversal was developed matters. It didn't spring fully formed from an IETF working group. It emerged from practitioners solving real problems in the field—people like me who were trying to make video work over an Internet that was never designed for it.
Second, there's a pattern here that I've seen repeatedly in my career. Ideas developed by independent engineers and small companies get absorbed into standards and products without attribution. The people who did the original work are forgotten, while the companies and standards bodies that formalized (and sometimes patented) the ideas get the credit.
Third, and most frustratingly, search engines—particularly Google—don't surface my early work. When people search for the history of NAT traversal or hole punching, they find the RFCs and the companies that built products on those standards. They don't find the engineers who figured it out first.
WebRTC now powers video calls for billions of people. Every Zoom call, Google Meet, Discord voice chat, and countless other applications rely on the NAT traversal techniques that STUN and ICE provide. The fundamental insight that makes all of this work—that you can use a third-party server to discover mapped addresses and then communicate directly—is something I documented in 2002 and implemented even earlier.
I'm not asking for royalties or recognition from standards bodies. I'm simply asking for the historical record to be accurate. I was there. I figured this out. I wrote it down. And now I'm telling the story.
Primary Sources
My 2002 Paper: http://www.ecip.com/fwdoc.htm
Wayback Machine Archive: https://web.archive.org/web/*/http://www.ecip.com/fwdoc.htm
Slashdot Discussion (December 2006): https://slashdot.org/story/06/12/15/191205/how-skype-punches-holes-in-firewalls
RFC 3489: https://www.rfc-editor.org/rfc/rfc3489
Livecam Project History: https://www.dnull.com/livecam.html
— John L. Sokol, 2025
John Sokol's Blog
John L. Sokol - computer expert, video, compression, information theory and all things cool.
Friday, November 28, 2025
I Invented NAT Hole Punching Before It Had a Name
Tuesday, November 25, 2025
Nostr vs. P2P Social
Key Architectural Difference
Nostr: Client → Relay (server) → Client
P2P Social (e.g., Scuttlebutt, SSB): Peer ↔ Peer (gossip/replication)
Nostr uses relays (simple websocket servers) as intermediaries, while P2P social networks have users directly exchange data with each other.
Nostr's Advantages Over P2P
| Aspect | Nostr's Approach |
|---|---|
| Simplicity | "Signed JSON over websockets" - trivially implementable |
| Reliability | No NAT traversal, no DHT, no gossip protocol complexity |
| Mobile-friendly | Clients can be thin; no need to run background sync |
| Discoverability | Relays provide known endpoints; P2P struggles with "how do I find you?" |
| Offline users | Relays store messages for offline recipients; P2P requires both parties online (or complex pub infrastructure) |
P2P's Advantages Over Nostr
| Aspect | P2P's Approach |
|---|---|
| No servers needed | True decentralization; Nostr still requires someone to run relays |
| Offline-first | Local-first data; works without internet once synced |
| Censorship resistance | Harder to block gossip than to block known relay servers |
| No hosting costs | Users bear their own storage/bandwidth |
The Tradeoff
The speaker acknowledges: "we're not in the business of selling magic - data has to be hosted, it has to be on a drive somewhere connected to the internet."
Nostr accepts that servers exist but keeps users from depending on any specific server. P2P tries to eliminate servers entirely, but that introduces complexity that limits adoption; "it just works" is the claim Nostr can make and P2P often cannot.
So what does Nostr actually run on? Web servers? Daemons? What is it built on?
Based on the transcript, Nostr's infrastructure is straightforward:
Relays
Relays are just websocket servers. The speaker describes the entire protocol as "signed JSON over websockets" - that's literally it.
They can run on:
- A VPS or cloud server
- A Raspberry Pi (the speaker mentions this as an option)
- Any machine that can maintain websocket connections and store JSON
There's no special daemon or complex software stack required. A relay just needs to:
- Accept websocket connections
- Receive JSON events from clients
- Store them (database, files, whatever)
- Serve them back to clients who request them
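Those four duties map almost directly onto Nostr's NIP-01 message types. Here is a minimal sketch of the relay side, kept as a pure function so any websocket layer (the npm `ws` package, for instance) can be bolted on separately; signature verification is omitted.

```javascript
// Handle one raw websocket message per NIP-01. `store` is any array-like
// event store; returns the list of JSON messages to send back.
function handleRelayMessage(store, raw) {
  const msg = JSON.parse(raw); // the protocol really is just JSON arrays
  const [type] = msg;
  if (type === 'EVENT') {
    const event = msg[1];
    store.push(event); // "store them (database, files, whatever)"
    return [['OK', event.id, true, '']];
  }
  if (type === 'REQ') {
    const [, subId, filter = {}] = msg;
    const matches = store.filter(e =>
      (!filter.authors || filter.authors.includes(e.pubkey)) &&
      (!filter.kinds || filter.kinds.includes(e.kind)));
    // stream matching events, then the end-of-stored-events marker
    return [...matches.map(e => ['EVENT', subId, e]), ['EOSE', subId]];
  }
  return [['NOTICE', 'unrecognized message']];
}
```

That really is most of a relay; everything else is persistence and connection management.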
Clients
Clients are regular apps - mobile apps, web apps, desktop apps. They:
- Connect to one or more relays via websocket
- Publish signed events
- Request events (by public key, by kind, etc.)
What's notably absent
- No blockchain
- No DHT (distributed hash table)
- No gossip protocol
- No consensus mechanism
- No special networking requirements
The speaker emphasizes this simplicity intentionally: "it's just JSON, you'll figure it out." The barrier to entry is low - if you can build anything that talks websockets and handles JSON, you can build a Nostr client or relay.
This is why adoption has been relatively fast compared to earlier decentralized social attempts - the technical lift is minimal.
Building Decentralized Social Media: A Peer-to-Peer Alternative to Facebook
By John Sokol, presented at the Dojo
The Problem with Centralized Platforms
As someone who's been involved in internet infrastructure since the early days—building some of the earliest CDNs in '92-'94 and working on streaming video—I've watched the evolution of social media with growing concern. The Twitter Files revelations and ongoing issues with Facebook have made one thing clear: we need alternatives to centralized social media platforms.
The core problem is simple but fundamental. As Jack Dorsey pointed out, any company-based platform is vulnerable. The moment you incorporate, governments can issue subpoenas, apply pressure, and force compromises. The result? Increasingly aggressive content moderation that goes far beyond preventing genuine harm.
Real-World Censorship Examples
The filtering has become absurdly disruptive:
- Getting banned for posting clips from Hollywood movies
- Facebook jail memes and pins have become so common they're being sold as merchandise
- Feeds filled with algorithmic content instead of posts from actual friends
- Legitimate technical content blocked by AI filters (try sharing information about Yasa electric motors—it's blocked because a Turkish politician shares the name)
- Humor and satire flagged as "false information"
I just want to do what I did with my BBS in 1987—one of the largest in the Bay Area, called Turbo Sysco. Back then, we'd share San Jose Mercury tech articles and discuss them freely. Today, that simple activity feels impossible on Facebook.
The Technical Vision
Can we build something better using modern browser capabilities? The answer is yes. Today's browsers are incredibly powerful:
- Enormous compute resources: Access to GPU, WebGL, WebGPU
- Significant storage: Browser-based storage systems
- WebRTC: Enables mesh networking between peers
- Flexible data management: Ability to simulate drives, export/import databases
The architecture I envisioned uses relays with WebRTC to establish connections between nodes, creating a truly distributed system where code executes in browser tabs rather than on centralized servers.
Exploring Existing Solutions
Before building from scratch, I surveyed the landscape:
PureJS
Interesting pure JavaScript implementation, but lacked the full feature set needed.
Croquet
Really impressive real-time synchronization. Their demo shows perfect timing across devices—open it on your phone and desktop, and they stay in perfect sync. The platform can run server modules directly in the browser, generating and rendering content while keeping everyone synchronized for gaming and interactive experiences.
The catch? You need an account with them. If the FBI or anyone else wanted to shut you down, they could simply turn off your account. Still centralized.
CYFS
Very interesting—they actually implemented something remarkably close to what I had sketched out independently.
Gun (by Mark Nadal)
I met Mark Nadal at the Dojo about a month ago, and his architecture is exactly what I had sketched out. It uses relays for WebRTC introductions (just like I proposed), but they're simply backups—the real communication happens peer-to-peer. I've been working hard to implement this, though as someone who isn't primarily a Node developer, it's been a brutal learning curve. (ChatGPT has been invaluable for deciphering the alphabet soup of modern web libraries.)
Nostr
The newest entrant, and it shows real promise. Unlike Mastodon's federated approach, Nostr is more censorship-resistant. While relay operators can filter content, your data persists on the protocol indefinitely. There's no central authority that can remove you entirely.
Mastodon vs. Nostr: Federation vs. True Distribution
Mastodon is distributed but federated. My experience showed that you can still get booted from instances for simply publishing links. Because there are people managing these instances, moderation creeps back in.
Nostr, on the other hand, can't be stopped in the same way. Individual relays might filter you, but your data remains on the protocol. As one attendee noted: "there's no filter—it's beautiful."
However, Nostr isn't truly peer-to-peer; everything still goes through relays. Gun's approach is more aligned with pure P2P, using relays only for initial introductions.
My Vision: Radical Transparency Without Interference
Here's my position: I don't even want to encrypt everything. If authorities want to monitor conversations, fine—do it in clear text. What I object to is the interference, the filtering, the fact-checking, the blocked links.
If I'm posting something illegal, let the FBI arrest me. But don't interfere with normal conversation. The bar for censorship has become absurdly low.
I want:
- A Facebook-like site where I can chat with friends
- The ability to share links without arbitrary blocking
- Discussion of news articles without algorithmic interference
- No "old lady in the middle of the conversation judging everything"
Technical Advantages
A truly peer-to-peer system offers compelling benefits:
- Zero bandwidth costs: Pull as much data as fast as you want
- No server fees: No backend infrastructure to maintain
- Resilience: No central point of failure
- Static hosting: Code can live on GitHub Pages—free and reliable
- Rich capabilities: Build virtual worlds, 3D animations, VR experiences, multiplayer interactive content
- User control: Each person can run their own relay if desired
Addressing Concerns
Q: Won't peer-to-peer put illegal content on my computer?
Ideally, encryption or other mechanisms prevent you from accessing cached content. More importantly, if designed as a caching system, you're only keeping copies of content you've actually viewed—not redistributing material you never looked at.
Q: How will people find each other without indexing?
Through direct sharing of keys, handles, or identifiers. In my case, I have my own domain and can run a relay, making myself discoverable. Users can share their connection information directly with friends.
The Path Forward
We're at a crossroads. The centralized social media experiment has shown its limitations. Censorship, algorithmic manipulation, and corporate vulnerability to government pressure have made these platforms increasingly hostile to free discourse.
The technology exists today to build alternatives. WebRTC, modern browsers, and systems like Gun and Nostr provide the building blocks. What we need is the will to build these systems and the community to adopt them.
The future of social media doesn't have to run through Silicon Valley data centers. It can run in your browser, connecting directly to your friends, with no intermediary deciding what you can say or share.
Interested in learning more about peer-to-peer social networking? Check out these projects:
Monday, November 24, 2025
P2P Social - The Uncensorable Social Network
P2P Social Beta is now running and on github.
I just fed the transcript and my blog post into Claude.ai, and it produced a working MVP (minimum viable product). It's built and working; I just need testers to help me debug everything.
— johnsokol (@johnsokol) November 24, 2025
P2P Social https://t.co/vyBTJVlXhN
P2P Social
Decentralized social networking. No servers. No censorship. No cost.
🌐 P2P Social - The Uncensorable Social Network
Social networking without the middleman. No servers. No censorship. No corporate overlords.
🎯 The Vision
Imagine a social network where:
- ✅ No one can ban you - No central authority
- ✅ No algorithmic feeds - See what your friends actually post
- ✅ No data mining - Your data stays in your browser
- ✅ No hosting costs - Runs on GitHub Pages + Firebase (free)
- ✅ No ads - No business model = no incentive to exploit
- ✅ Open source - Fork it, modify it, own it
This isn't a dream. The technology exists today.
🚀 How It Works
You write a post → Splits into chunks → Distributed across friends' browsers
↓
Everyone sees it
↓
No server involved
Three Simple Components
- GitHub Pages - Hosts the static web app (free forever)
- Firebase - Helps browsers find each other (free tier is plenty)
- WebRTC - Browser-to-browser connections (peer-to-peer magic)
After connecting, Firebase is out of the picture. All data flows directly between users.
🎬 Demo
Live Demo: [Coming Soon]
Screenshots:
┌─────────────────────────────────────┐
│ P2P Social [⚙️] │
├─────────────────────────────────────┤
│ │
│ 📝 What's on your mind? │
│ ┌─────────────────────────────┐ │
│ │ │ │
│ └─────────────────────────────┘ │
│ [Post] 🔐 Encrypted │
│ │
│ ───────────────────────────────── │
│ │
│ 👤 Alice 2m ago│
│ ┌─────────────────────────────┐ │
│ │ Just set up my node! │ │
│ │ Running purely P2P 🚀 │ │
│ └─────────────────────────────┘ │
│ 💬 3 replies 🔄 12 shares │
│ │
│ 👤 Bob 5m ago│
│ ┌─────────────────────────────┐ │
│ │ No more censorship! 🎉 │ │
│ └─────────────────────────────┘ │
│ │
└─────────────────────────────────────┘
🏗️ Architecture
High Level
┌──────────┐ Signaling ┌──────────┐
│ User A │ ←──────────────→ │ Firebase │
│ (Browser)│ │ │
└────┬─────┘ └────┬─────┘
│ │
│ Signaling │
│ ←──────────────────────────┘
│
│ WebRTC (Direct P2P)
↓
┌──────────┐
│ User B │
│ (Browser)│
└──────────┘
Data Redundancy
Posts are split using Hamming (7,4) codes:
Original Post (1 MB)
↓
Split into 4 chunks (250 KB each)
↓
Generate 3 parity chunks (error correction)
↓
= 7 total chunks distributed across 7 peers
↓
Can lose ANY 2 chunks and still recover the post
This means content survives even if some of the peers holding chunks close their browsers.
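For concreteness, here is a runnable toy of the pipeline above, assuming one concrete XOR parity assignment (P1 = D1⊕D2⊕D3, P2 = D1⊕D2⊕D4, P3 = D1⊕D3⊕D4); the chunk representation and helper names are illustrative.

```javascript
// XOR several equal-length byte arrays together, position by position.
function xorChunks(...chunks) {
  return chunks[0].map((_, i) =>
    chunks.reduce((acc, c) => acc ^ c[i], 0));
}

// Encode 4 data chunks into 7 distributable chunks (4 data + 3 parity).
function encode(d1, d2, d3, d4) {
  return {
    d1, d2, d3, d4,
    p1: xorChunks(d1, d2, d3),
    p2: xorChunks(d1, d2, d4),
    p3: xorChunks(d1, d3, d4),
  };
}

// Recover a single missing data chunk from one parity equation:
// d2 = P1 ^ d1 ^ d3. (Recovering two simultaneous losses means
// solving the parity equations iteratively, but the idea is the same.)
function recoverD2(c) {
  return xorChunks(c.p1, c.d1, c.d3);
}
```

Each surviving peer contributes its chunk, and the XOR arithmetic fills in whatever went offline.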
🔒 Security & Privacy
Your Identity
- Public key = Your username (shareable)
- Private key = Your password (never leaves your browser)
- No email, no phone number, no real name required
Content Authenticity
- Every post is cryptographically signed
- Recipients verify signatures before displaying
- Impossible to impersonate another user
Optional Encryption
- End-to-end encrypted direct messages
- Only sender and recipient can read
- Network only sees encrypted gibberish
What About Illegal Content?
You control what you store:
- Only cache content from users you follow
- Block malicious users instantly
- You're not a server, you're a peer
- Legal precedent: caching != distribution
💻 Getting Started
As a User
Just visit the site:
https://p2p-social.github.io
No installation. No signup. Just start posting.
As a Developer
Clone and run locally:
git clone https://github.com/p2p-social/p2p-social
cd p2p-social
python -m http.server 8000
open http://localhost:8000
Firebase Setup (5 minutes):
# 1. Create Firebase project at console.firebase.google.com
# 2. Enable Realtime Database
# 3. Copy config to src/firebase-config.js
# 4. Done!
Deploy to GitHub Pages:
npm run deploy
# Your site is now live at username.github.io/p2p-social
🛠️ Tech Stack
Frontend:
- Pure JavaScript (no frameworks... yet)
- Web Crypto API
- IndexedDB
- WebRTC Data Channels
Backend (ish):
- Firebase Realtime Database (signaling only)
- GitHub Pages (static hosting)
Protocols:
- WebRTC for P2P connections
- Hamming codes for redundancy
- Ed25519 for signatures
- SHA-256 for content addressing
📚 Documentation
- Architecture Document - Complete technical design
- Protocol Specification - Wire protocol details
- Contributing Guide - How to help
- API Reference - Developer documentation
🗺️ Roadmap
✅ Phase 1: MVP (Current)
- WebRTC mesh networking
- Firebase signaling
- Basic text posts
- Public key authentication
- Simple UI
📅 Phase 2: Redundancy (Next)
- Hamming code implementation
- Chunk distribution
- Automatic recovery
- Network health monitoring
📅 Phase 3: Social Features
- User profiles
- Follow/unfollow
- Chronological feed
- Search & discovery
- Notifications
📅 Phase 4: Polish
- Image/video support
- Markdown formatting
- Threading & comments
- Mobile PWA
- Dark mode 🌙
📅 Phase 5: Advanced
- E2E encrypted DMs
- Groups & communities
- Content moderation tools
- Native mobile apps
- Browser extension
🤝 Contributing
We need help!
- 🎨 Designers - Make it beautiful and intuitive
- 💻 Developers - Build features and fix bugs
- 📝 Writers - Documentation and tutorials
- 🧪 Testers - Break things and report issues
- 🌍 Translators - Make it accessible worldwide
See CONTRIBUTING.md for details.
Quick Contribution Ideas
Good First Issues:
- Implement basic chat UI
- Add emoji picker
- Create user profile cards
- Write unit tests for Hamming codes
- Design a logo
Advanced Challenges:
- Optimize WebRTC connection management
- Implement efficient gossip protocol
- Build NAT traversal fallback system
- Create mobile-responsive layouts
- Write comprehensive protocol tests
🌟 Why This Matters
The Problem
Modern social media is broken:
- Censorship - Platforms ban users for arbitrary reasons
- Manipulation - Algorithms optimize for engagement, not truth
- Privacy violations - Your data is the product
- Monopolies - A handful of companies control discourse
- Deplatforming - One ban = locked out of digital life
The Solution
Take back control:
- Own your data
- Own your identity
- Own your connections
- Own your platform
No company can:
- Ban you
- Censor you
- Mine your data
- Manipulate your feed
- Sell your attention
🎓 Inspiration
This project stands on the shoulders of giants:
- John Sokol - Original vision for P2P social networking
- Jack Dorsey - Bluesky initiative
- Gun.js - Graph database synchronization
- IPFS - Content-addressed storage
- WebTorrent - P2P in the browser
- Nostr - Minimal censorship-resistant protocol
⚖️ Philosophy
Core Principles
- User Sovereignty - You own your data and identity
- Simplicity - Technology should be understandable
- Openness - Code and protocols are public
- Resilience - No single point of failure
- Freedom - Communicate without permission
Non-Goals
- ❌ Anonymity (use Tor for that)
- ❌ Perfect privacy (use Signal for that)
- ❌ Scalability to billions (use BitTorrent for that)
- ❌ Professional polish (we'll get there)
We're building a tool, not a product.
📊 Comparison
| Feature | P2P Social | Facebook | Mastodon | Nostr |
|---|---|---|---|---|
| Censorship Resistant | ✅ | ❌ | Partial | ✅ |
| No Servers | ✅ | ❌ | ❌ | ✅* |
| Free Hosting | ✅ | ✅ | ❌ | Partial |
| Open Source | ✅ | ❌ | ✅ | ✅ |
| Browser-Only | ✅ | ✅ | ✅ | Partial |
| Data Redundancy | ✅ | ✅ | ❌ | ❌ |
| E2E Encryption | ✅ | Partial | ✅ | ✅ |
* Nostr uses relay servers, not fully P2P
🐛 Known Issues
Current Limitations:
- ⚠️ Bootstrap problem - Need seed peers for first users
- ⚠️ NAT traversal - Some networks require TURN servers
- ⚠️ Data persistence - Content disappears if all peers offline
- ⚠️ Discovery - Hard to find new users without directory
- ⚠️ Mobile - WebRTC support varies on mobile browsers
We're working on all of these!
📜 License
MIT License - Do whatever you want with this code.
Copyright (c) 2024 P2P Social Contributors
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software to use, modify, distribute, and sell without restriction.
See LICENSE file for full terms.
🔗 Links
- Website: [Coming Soon]
- GitHub: https://github.com/p2p-social/p2p-social
- Docs: https://docs.p2p-social.org
- Discord: [Coming Soon]
- Twitter: [Coming Soon]
🙏 Acknowledgments
Built with ❤️ by developers who believe the internet should be free and open.
Special Thanks:
- John Sokol for the original vision
- The WebRTC team at Google
- Firebase for generous free tier
- GitHub for free hosting
- Every contributor who makes this possible
💬 Get Involved
Join the conversation:
Let's build the social network we actually want.
"I just want to discuss news articles with my friends without some algorithm deciding what I can see."
Modern social platforms have become censorship bottlenecks:
- Content filtered by opaque algorithms
- Users banned for sharing legitimate content
- Platforms prioritize engagement over user intent
- Central servers = central points of failure and control
We can do better.
This is a serverless peer-to-peer social platform that runs entirely in browser tabs, with zero backend infrastructure costs and zero ability for platforms to censor content.
┌─────────────────┐
│ GitHub Pages │ Static HTML/JS/CSS
│ (Free Hosting) │ Unlimited bandwidth
└────────┬────────┘
│
↓
┌─────────────────┐
│ Firebase │ WebRTC Signaling only
│ (Matchmaker) │ Peer discovery
└────────┬────────┘
│
↓
┌─────────────────┐
│ Browser Tabs │ Direct P2P connections
│ (The Network) │ Data storage & routing
└─────────────────┘
Key Principle: After initial handshake, all data flows peer-to-peer. Firebase never sees your content.
What it does:
- Serves the single-page application
- Zero server costs
- Distributed via GitHub's CDN
- Instant deployment via git push
Technologies:
- Pure JavaScript (or React/Vue/Svelte)
- IndexedDB for local storage (5MB+ persistent)
- WebRTC Data Channels
- Web Crypto API for encryption
Files:
/index.html # Entry point
/app.js # Main application logic
/p2p-network.js # WebRTC mesh management
/storage.js # Local data & Hamming codes
/crypto.js # Encryption utilities
/ui.js # User interface
What it does:
- Helps peers find each other
- Exchanges WebRTC handshake data (SDP/ICE)
- Maintains peer presence/availability
- Never sees actual content
Firebase Services Used:
- Realtime Database: Peer presence & signaling
- Authentication: Optional user identity
- Hosting: Alternative to GitHub Pages
Data Structure:
{
"peers": {
"peer_id_123": {
"online": true,
"lastSeen": 1700000000,
"offer": "...",
"answer": "..."
}
},
"rooms": {
"room_id_456": {
"members": ["peer_id_123", "peer_id_789"]
}
}
}
Cost: Free tier supports ~100 concurrent users, scales affordably.
What it does:
- Direct browser-to-browser data transfer
- Mesh network topology
- Content distribution
- Zero bandwidth costs (after handshake)
Connection Flow:
1. Peer A loads app from GitHub Pages
2. Connects to Firebase for signaling
3. Publishes presence & connection offer
4. Peer B receives offer via Firebase
5. Sends answer back through Firebase
6. WebRTC connection established
7. Firebase no longer involved
8. All data flows directly A ↔ B
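The eight steps above can be walked through with a plain object standing in for Firebase and strings standing in for SDP blobs. This is purely illustrative; real code would use RTCPeerConnection and the Firebase SDK.

```javascript
// Toy signaling walk-through: `broker` plays Firebase's matchmaker role.
function connectPeers(broker, a, b) {
  broker[a] = { offer: `offer-from-${a}` };          // steps 2-3: A publishes presence + offer
  const offer = broker[a].offer;                     // step 4: B reads A's offer via the broker
  broker[a].answer = `answer-from-${b}-to-${offer}`; // step 5: B sends its answer back through the broker
  const answer = broker[a].answer;                   // step 6: WebRTC connection established
  delete broker[a];                                  // step 7: Firebase no longer involved
  return { peers: [a, b], answer };                  // step 8: data flows directly A <-> B
}
```

The key property to notice is that after the handshake the broker holds nothing: all state lives at the two peers.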
Mesh Topology:
[User A] ←→ [User B]
↕ ↕
[User C] ←→ [User D]
↕ ↕
[User E] ←→ [User F]
Each user maintains 3-6 active connections for redundancy.
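Keeping each tab inside that 3-6 connection band can be reduced to a small planning step, run whenever the peer list changes. The sketch below is a hypothetical helper (the function name and shape are illustrative, not from the project): given current connections and known candidates, it decides which peers to dial or drop.

```javascript
// Sketch: keep the number of active mesh links within a target band.
// connectedIds and candidateIds are plain arrays of peer IDs; returns
// which peers to dial or drop so the count stays in [min, max].
function planConnections(connectedIds, candidateIds, min = 3, max = 6) {
  const plan = { dial: [], drop: [] };
  if (connectedIds.length < min) {
    // Dial candidates we are not already connected to
    const known = new Set(connectedIds);
    plan.dial = candidateIds
      .filter(id => !known.has(id))
      .slice(0, min - connectedIds.length);
  } else if (connectedIds.length > max) {
    // Drop the most recently added links first (simplest policy)
    plan.drop = connectedIds.slice(max);
  }
  return plan;
}
```

A real implementation would also weigh latency and chunk availability when choosing which links to keep.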
Browser tabs can close at any time. How do we keep data available?
Hamming (7,4) Error Correction:
- Take 1 file, split into 4 data chunks
- Generate 3 parity chunks (XOR operations)
- Distribute the 7 chunks across 7 peers
- Can lose ANY 2 chunks and still recover the file
- Modest overhead (code rate 4/7, roughly 57% storage efficiency)
Example:
Original File (400 KB)
↓
Split into 4 chunks (100 KB each)
D1, D2, D3, D4
↓
Generate 3 parity chunks
P1 = D1 ⊕ D2 ⊕ D3
P2 = D1 ⊕ D2 ⊕ D4
P3 = D1 ⊕ D3 ⊕ D4
↓
Distribute 7 chunks (D1-D4, P1-P3) to 7 peers
Recovery:
- If the peer holding D2 goes offline
- The six remaining chunks reconstruct D2 (e.g., D2 = P1 ⊕ D1 ⊕ D3)
- The system automatically detects and repairs missing chunks
Implementation:
// Simplified Hamming-style erasure coding; xor() XORs equal-length
// chunk buffers, matching the parity equations above
function encodeHamming(data) {
  const chunks = splitIntoChunks(data, 4);
  return {
    d1: chunks[0],
    d2: chunks[1],
    d3: chunks[2],
    d4: chunks[3],
    p1: xor(chunks[0], chunks[1], chunks[2]),
    p2: xor(chunks[0], chunks[1], chunks[3]),
    p3: xor(chunks[0], chunks[2], chunks[3])
  };
}
// Recover a missing data chunk by re-solving a parity equation,
// e.g. D2 = P1 ⊕ D1 ⊕ D3; a second erasure takes one more substitution
function decodeHamming(chunks) {
  if (!chunks.d2) chunks.d2 = xor(chunks.p1, chunks.d1, chunks.d3);
  // ...analogous cases for the other chunks...
  return concatChunks([chunks.d1, chunks.d2, chunks.d3, chunks.d4]);
}

1. Hot Storage (in-memory objects)
- Active content held in JavaScript memory
- Multi-GB capacity possible
- Lost when tab closes
2. Warm Storage (IndexedDB)
- Persistent, browser-managed quota (often hundreds of MB or more)
- User's own content + frequently accessed
- Survives browser restart
3. Cold Storage (Distributed Network)
- Content distributed across peer network
- Hamming-encoded for redundancy
- Reconstructed on demand
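The three tiers compose into one lookup path: check memory first, fall back to IndexedDB, and only then reconstruct from the peer network, promoting hits upward as you go. A minimal sketch, with the warm and cold tiers injected as async stores so the same logic works against IndexedDB or the mesh (the function and store names here are illustrative):

```javascript
// Sketch of the three-tier lookup. `hot` is an in-memory Map; `warm`
// and `cold` are any objects with an async get(hash) method (e.g. an
// IndexedDB wrapper and the peer network). Lower-tier hits are
// promoted to hot storage for subsequent reads.
async function fetchContent(hash, hot, warm, cold) {
  if (hot.has(hash)) return hot.get(hash);

  const cached = await warm.get(hash);
  if (cached !== undefined) {
    hot.set(hash, cached); // promote to hot
    return cached;
  }

  const rebuilt = await cold.get(hash); // reconstruct from peers
  if (rebuilt !== undefined) hot.set(hash, rebuilt);
  return rebuilt;
}
```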
Every piece of content has a unique hash-based address:
p2p://QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/post
Benefits:
- Content-addressable (hash = verification)
- Deduplication (same hash = same content)
- Immutable (can't change without changing hash)
- Censor-resistant (no single location)
Discovery:
// User wants to view a post
const contentHash = "QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG";
// Query connected peers
const peers = getConnectedPeers();
for (const peer of peers) {
if (peer.hasContent(contentHash)) {
const chunks = await peer.requestChunks(contentHash);
const content = reconstructFromChunks(chunks);
displayContent(content);
break;
}
}

Public Key Cryptography:
- Each user generates Ed25519 keypair
- Public key = user ID
- Private key stored in browser (never shared)
- All posts signed with private key
// Generate identity (Ed25519 in Web Crypto is relatively new; check browser support)
const keypair = await crypto.subtle.generateKey(
{ name: "Ed25519" },
true,
["sign", "verify"]
);
// Sign content
const signature = await crypto.subtle.sign(
"Ed25519",
keypair.privateKey,
contentBuffer
);
// Others verify
const isValid = await crypto.subtle.verify(
"Ed25519",
authorPublicKey,
signature,
contentBuffer
);

End-to-End Encryption for Private Messages:
- Sender derives a shared key from the recipient's public key (in practice via a companion X25519 keypair, since Ed25519 keys are signing-only)
- Only the recipient can decrypt
- Network only sees encrypted blobs
No Traditional Auth:
- No passwords
- No accounts to hack
- No servers to breach
- Identity = cryptographic key
Follow Model:
// Follow a user (store their public key)
const following = {
"user_alice": "ED25519_PUBLIC_KEY_ALICE",
"user_bob": "ED25519_PUBLIC_KEY_BOB"
};
// Only accept content from followed users
function validatePost(post) {
const authorKey = following[post.author];
if (!authorKey) return false;
return verifySignature(post, authorKey);
}

Computational Proof of Work:
- Posts require small PoW (prevents spam)
- Configurable difficulty
- Users can ignore low-PoW content
Web of Trust:
- Only see content from followed users
- Or users-of-users (2nd degree)
- Block lists shared among trusted peers
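The visibility rule above reduces to a small predicate: an author is visible if directly followed or followed by someone you follow, unless they appear on a shared block list. A sketch, with illustrative names:

```javascript
// Web-of-trust visibility sketch. `following` is a Set of author IDs,
// `followsOf` maps each followed ID to that user's own follow Set,
// and `blocked` is the merged block list from trusted peers.
function isVisible(author, following, followsOf, blocked) {
  if (blocked.has(author)) return false;
  if (following.has(author)) return true; // 1st degree
  for (const friend of following) {
    const theirs = followsOf.get(friend);
    if (theirs && theirs.has(author)) return true; // 2nd degree
  }
  return false;
}
```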
ANNOUNCE {
peer_id: "abc123",
public_key: "...",
capabilities: ["storage", "relay"],
timestamp: 1700000000
}
QUERY {
looking_for: "content_hash_xyz",
requester: "peer_abc123"
}
RESPONSE {
have_content: true,
chunks: [1, 3, 4, 7],
peer_id: "def456"
}
REQUEST_CHUNK {
content_hash: "Qm...",
chunk_number: 3,
requester_sig: "..."
}
SEND_CHUNK {
content_hash: "Qm...",
chunk_number: 3,
data: <binary>,
sender_sig: "..."
}
// New content propagates through network
GOSSIP {
type: "NEW_POST",
content_hash: "Qm...",
author: "alice_pubkey",
timestamp: 1700000000,
hops: 2
}
// Peers re-broadcast to their connections
// TTL prevents infinite loops
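The re-broadcast and loop-prevention logic can be sketched in a few lines. This toy version models peers as plain objects wired into a mesh (the names are illustrative): a seen-set suppresses duplicates and a hop budget bounds propagation even in cyclic topologies.

```javascript
// Gossip propagation sketch: each peer re-broadcasts unseen messages
// to its neighbours with a decremented hop budget. The seen-set plus
// the TTL prevents infinite loops in a cyclic mesh.
function makePeer(id) {
  return { id, neighbours: [], seen: new Set(), received: 0 };
}

function gossip(peer, msg, ttl) {
  if (ttl < 0 || peer.seen.has(msg.content_hash)) return;
  peer.seen.add(msg.content_hash);
  peer.received++; // deliver locally once
  for (const n of peer.neighbours) gossip(n, msg, ttl - 1);
}
```

In the real network each hop would be a WebRTC data-channel send rather than a recursive call, but the dedupe-and-decrement structure is the same.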
1. Visit: yourproject.github.io
2. App generates keypair
3. Shows "Your ID: alice_abc123"
4. Connects to Firebase for peer discovery
5. Establishes connections to online peers
6. Ready to post/browse
1. User writes post
2. Post signed with private key
3. Split into chunks + Hamming encoding
4. Distributed to connected peers
5. Content hash returned as "permalink"
6. Peers gossip about new content
1. See post hashes from followed users
2. Request chunks from peers
3. Reconstruct from Hamming codes
4. Verify signature
5. Display content
1. Friend shares their public key (QR code, link, etc)
2. You add to "following" list
3. Network prioritizes content from followed users
4. You help distribute their content
- Basic WebRTC mesh networking
- Firebase signaling setup
- Simple text posts
- Content-addressed storage
- Public key identity
Deliverable: Two users can chat P2P via browser tabs
- Hamming (7,4) encoding/decoding
- Chunk distribution protocol
- Automatic reconstruction
- Peer storage management
- Network health monitoring
Deliverable: Content persists across tab closures
- User profiles
- Follow/unfollow
- Feed algorithm (chronological)
- Content discovery
- Search
Deliverable: Basic social network functionality
- Image/video support
- Markdown formatting
- Threading/comments
- Notifications
- Mobile-responsive UI
Deliverable: Feature-complete social platform
- End-to-end encrypted DMs
- Groups/communities
- Content moderation tools
- Analytics dashboard
- Mobile apps (PWA)
Problem: Most users behind NAT/firewall
Solution:
- Use Google's free STUN servers
- Add TURN servers for worst-case scenarios
- Most modern browsers handle this automatically
const configuration = {
iceServers: [
{ urls: 'stun:stun.l.google.com:19302' },
{ urls: 'stun:stun1.l.google.com:19302' }
]
};

Problem: First user has no peers
Solution:
- Static "seed peers" (always-on tabs in cloud VMs)
- Incentivize users to keep tabs open
- Browser extension for persistent background page
- Progressive Web App for mobile persistence
Firebase Free Tier:
- 100 simultaneous connections
- 10 GB/month data transfer
- Plenty for signaling
Beyond Free Tier:
- Multiple Firebase projects as backup
- Self-hosted signaling servers (Socket.io)
- Alternative services (PeerJS, Gun.js relays)
Challenge: All users close tabs
Mitigation:
- Users encouraged to export/backup important content
- "Pinning" feature for critical content
- Browser extensions stay alive in background
- Community runs dedicated "archive nodes"
# Just visit the site
https://yourproject.github.io
# Or run locally
git clone https://github.com/yourproject/p2p-social
cd p2p-social
python -m http.server 8000
# Open http://localhost:8000

# Clone repository
git clone https://github.com/yourproject/p2p-social
cd p2p-social
# Install dependencies
npm install
# Set up Firebase
# 1. Create Firebase project
# 2. Copy config to src/firebase-config.js
# 3. Enable Realtime Database
# Run development server
npm run dev
# Deploy to GitHub Pages
npm run deploy

// src/firebase-config.js
const firebaseConfig = {
apiKey: "YOUR_API_KEY",
authDomain: "your-project.firebaseapp.com",
databaseURL: "https://your-project.firebaseio.com",
projectId: "your-project",
storageBucket: "your-project.appspot.com",
messagingSenderId: "123456789",
appId: "YOUR_APP_ID"
};

p2p-social/
├── index.html # Entry point
├── src/
│ ├── core/
│ │ ├── p2p-network.js # WebRTC mesh logic
│ │ ├── signaling.js # Firebase integration
│ │ ├── storage.js # IndexedDB wrapper
│ │ └── hamming.js # Error correction codes
│ ├── crypto/
│ │ ├── identity.js # Key generation
│ │ ├── signing.js # Content signatures
│ │ └── encryption.js # E2E encryption
│ ├── protocol/
│ │ ├── messages.js # Protocol definitions
│ │ ├── gossip.js # Content propagation
│ │ └── discovery.js # Peer discovery
│ ├── ui/
│ │ ├── feed.js # Main feed
│ │ ├── composer.js # Post composer
│ │ ├── profile.js # User profiles
│ │ └── settings.js # Configuration
│ └── utils/
│ ├── hash.js # Content addressing
│ ├── chunks.js # Data chunking
│ └── validate.js # Input validation
├── docs/
│ ├── ARCHITECTURE.md # This document
│ ├── PROTOCOL.md # Protocol specification
│ └── API.md # Developer API
├── tests/
│ ├── unit/ # Unit tests
│ └── integration/ # Integration tests
└── examples/
├── simple-chat/ # Basic chat example
└── file-sharing/ # File sharing demo
We need help with:
Core Development:
- WebRTC optimization
- Hamming code implementation
- Network protocol design
- Performance tuning
User Experience:
- UI/UX design
- Mobile responsiveness
- Accessibility
- Documentation
Infrastructure:
- Firebase alternatives
- TURN server deployment
- Archive nodes
- Testing framework
See CONTRIBUTING.md for guidelines
Q: Is this like BitTorrent? A: Similar concept, but for social networking. Content is distributed across users instead of centralized servers.
Q: What happens if I close my tab? A: Content you've viewed is cached in other users' browsers. Hamming codes ensure redundancy.
Q: Can content be deleted? A: You can stop distributing it, but others may cache it. This is by design (censorship resistance).
Q: Is this anonymous? A: Pseudonymous. Your public key is your identity, but not linked to real identity unless you share it.
Q: What about illegal content? A: Users control what they store and distribute. Liability for automatically cached, encrypted chunks you cannot read is an unsettled legal question that varies by jurisdiction; the design minimizes exposure by keeping cached chunks unreadable to the host. Block malicious users via web-of-trust.
Q: How fast is it? A: Initial connection takes 1-2 seconds. After that, direct P2P is often faster than traditional servers.
Q: Mobile support? A: Yes! WebRTC works in mobile browsers. PWA support coming.
Q: Can governments block this? A: They can block Firebase, but the app can use alternative signaling servers. Once connected, traffic is P2P and harder to filter.
- Gun.js: Graph database with P2P sync
- IPFS: Distributed file system
- Nostr: Minimal protocol for censorship-resistant social
- Scuttlebutt: Offline-first P2P social network
- Matrix: Federated messaging
- Mastodon: Federated microblogging
How we're different:
- Zero infrastructure (GitHub + Firebase free tiers)
- Pure browser-based (no installs)
- Hamming redundancy for reliability
- Focus on simplicity
MIT License - Build whatever you want with this.
- GitHub: https://github.com/yourproject/p2p-social
- Matrix: #p2p-social:matrix.org
- Email: hello@p2p-social.org
Let's build the uncensorable web.
Inspired by John Sokol's vision of peer-to-peer social networking and decades of distributed systems research.
Special thanks to the WebRTC, IPFS, and Gun.js communities for pioneering this space.