07
Isocore and the Internet
2026-02-07 · 2185 words
tl;dr: Today I typed up the first ~13 pages of a WIP specification for Isocore. The spec is about halfway done. I wrote it to be fairly approachable. Isocore is a distributed runtime for local-first applications. (Think of it like a BEAM-like runtime for untrusted Wasm Components.) I want to write an alternative to the web, and this is the first step. (Isocore is still under heavy development.)
The internet is the network of networks. There’s only really one internet. Well, two I guess. The web is a layer on top of the internet, and today it serves as something of a “universal application platform”. I’m surprised there’s only really one web.
First, this isn’t exactly true. There are a lot of web-like protocols that are more document-centric, like Gemini and Gopher. No, not that Gemini. Email is also a protocol for distributing documents, though not in a linkable way. If we define the web as “servers owned by different people that can be connected to by a client-side application speaking some common protocol”, then I guess, like, yeah, sure, Minecraft is an alternative to the web. But you don’t browse Minecraft; there aren’t portals that take you between Minecraft servers. Wait, that would actually be very cool. Someone should get on this.
Besides the web, though, there are no other major “universal application platforms”. There aren’t many other apps where you can just type some text or click a link to load a page and download an app on demand. I mean, unless you count superapps like WeChat. Then again, WeChat isn’t one client among many speaking to many servers, so I don’t think it counts.
I suppose I should stop beating around the bush and first define what I mean by “a web”, and what makes the current dominant web, the world-wide web, unique. To begin,
- A web is a collection of standard protocols that allow users to download and execute applications on the fly. For the world-wide web, those protocols and formats are DNS, HTTPS, HTML, CSS, JS, WSS, etc.
- A web is distributed, meaning individuals can bring their own browsers and servers, as long as they speak the same protocols.
- A web is a network, meaning applications can link to one another and interact with one another. This is what makes a web, well, a web.
Let’s talk about what the world-wide web has going for it. I should start by saying that I love the web; I wouldn’t have a website if I didn’t. First, I love that the web is developed in the open. The W3C is a great steward of the web, and so many great ideas, protocols, and internet technologies have come out of it. Second, the web has many competing browsers, and an even greater diversity of code running on servers. Third, the web has great network effects: everyone knows what a URL is, anyone with a URL and a browser can access a page, and there’s no centralized app directory; any page can point to any other page, and through the tireless work of search engines constantly indexing it all, we have a great way to discover the pages we need to find!
But there are a few issues with the web as it currently stands. Browser engines are extremely complex. There are many strange behaviors and incompatibilities; a lot of cruft has accumulated as APIs have evolved over the years. The client-server application model means that applications and data are mostly controlled by the server side of the equation; this leads to ‘shoebox’ applications that don’t interoperate well with one another. Each website requires a username and password because each server rolls its own auth from scratch: there is no standard for browsers to provide credentials to websites in a privacy-preserving manner. Finally, if you want to collaborate with someone on a document, even on the same local network, you probably have to go through someone else’s server first. For those who know how the internet is plumbed, some of these limitations can be a little frustrating. DDoS attacks are also common, and discourage smaller players from hosting their own stuff. As a result, one company has man-in-the-middled like 80% of the internet (or whatever the meme is).
The current web incentivizes ‘a web of ivory walls’. There are too many platforms that ‘protect’ users from their own data. Everything feels clunky and slow, because it is. There’s this horrible bot-and-troll tragedy of the commons unfolding, and my friends seem to be retreating to their private walled-garden group chats (Discord/WhatsApp/iMessage) in response. Or spending time on the open web, allowing memes to slopfen their brains.
What I’d like to see is a smaller web, one that is scoped in audience and private by default. Software should be local-first, meaning you own your data locally; browsers help you work with your information by providing professional tools available on demand. I’d like for people to blog again, to run servers that host their own group chats, to build malleable software together.
Now, with all that being said, I think the current web is headed in the right direction. There are many cool new efforts, like WebAssembly (Wasm) and WebGPU, that turn the web from a clunky document viewer into a native-competitive, capability-based application platform. A part of me is fascinated by the idea of building a browser that doesn’t speak JS or HTML or CSS and just runs Wasm blobs that interface with WebGPU over WIT. What if you were to sever off Wasm and these next-generation web technologies and build a new web from scratch, incorporating all the lessons we’ve picked up in the 30-odd years we’ve been running the experiment of the web?
Around 7 years ago, I started working on a project I called Solidarity (which I also blogged about). More recently, I’ve started pulling those threads together again with a project called Isocore, for lack of a better name. Whereas Solidarity was a grand vision strung together from a loose bag of protocols and runtimes, Isocore is a concrete, scoped server specification with an example runtime that can be implemented in any language.
I don’t aim to replace the web with Isocore, at least not yet. Instead, I want to create an open source ‘webserver’ that changes the way web applications are built.
Isocore is designed to make client-server request-response apps deeply unfun to write. On the web today, browsers are mostly dumb clients that offload all data, logic, and access to servers; every interaction is a request-response round-trip to a machine you don’t own.
Isocore is designed to make local-first applications fun to write. Under this vision of the web, servers become dumb relays that replicate data, manage access, and crunch numbers on behalf of authorized individuals. Anyone with some technical grit can run their own Isocore server node. You can pull someone else’s app and have it run fully locally, or against your own server. What makes Isocore uniquely suited for this?
First, Isocore has channels, and channels replace request-response as the primary data synchronization primitive. All data lives in signed append-only logs of events, similar to Dat or Hypercore, that any Isocore node can replicate. When you subscribe to a channel, events are synchronized in real time; you don’t need to poll for updates. Channels are not owned by servers, but by whoever holds the signing key that created the channel. You can start a channel locally, in your browser, and a friend on another device can subscribe to it. Any node that has a copy can serve it to others.
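To make that concrete, here’s a minimal sketch of a channel as a signed append-only log, in Rust. Everything here is illustrative rather than lifted from the spec: I’m assuming ed25519 signatures (via the ed25519-dalek crate), and the `Event` and `Channel` shapes are made up for this post.

```rust
// Hypothetical sketch: a channel as a signed append-only log.
// Names and structure are invented for illustration.
use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};

struct Event {
    seq: u64,         // position in the log
    payload: Vec<u8>, // application-defined event data
    sig: Signature,   // signature over (seq, payload) by the channel key
}

struct Channel {
    key: VerifyingKey, // whoever holds the matching SigningKey owns the channel
    log: Vec<Event>,   // append-only; replicas only ever grow
}

impl Channel {
    /// Append a new event. Only the holder of the signing key can do this,
    /// regardless of which node the channel happens to be replicated to.
    fn append(&mut self, signer: &SigningKey, payload: Vec<u8>) {
        let seq = self.log.len() as u64;
        let msg = [&seq.to_le_bytes()[..], &payload[..]].concat();
        let sig = signer.sign(&msg);
        self.log.push(Event { seq, payload, sig });
    }

    /// Ingest an event replicated from a peer. Validity comes from the
    /// channel's key, not from trusting the node that relayed the event.
    fn ingest(&mut self, event: Event) -> bool {
        let msg = [&event.seq.to_le_bytes()[..], &event.payload[..]].concat();
        let next = self.log.len() as u64;
        if event.seq == next && self.key.verify(&msg, &event.sig).is_ok() {
            self.log.push(event);
            true
        } else {
            false
        }
    }
}
```

The important property is in `ingest`: a replica can accept events from any peer at all, because authenticity is checked against the channel’s key rather than against the connection it arrived over.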
Second, Isocore nodes use fine-grained, work-based rate-limiting to dynamically shed load. Isocore nodes communicate by sending messages to one another. If a node gets overloaded, it temporarily increases the computational cost of sending it messages, and prioritizes messages to maintain a steady state. I have an old blog post that outlines an earlier sketch of this idea. You can think of Isocore’s version as a fine-grained, invisible, souped-up Anubis. This approach makes request-response expensive, but replication via subscription cheap. Remember, the channel you’re writing to on your local machine is the source of truth, not the node you’re replicating to. Because channels are replicated, you don’t need a beefy server to serve a million people at once. You can publish to a handful of servers, and they can replicate to a few others, until everyone subscribed to your channel is served.
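For a flavor of what difficulty-gating might look like, here’s a hashcash-style sketch using the sha2 crate. The load signal and the scaling factor are completely made up, and the spec’s actual cost function is still in flux; this just shows the shape of the mechanism.

```rust
// Hypothetical sketch of load-based difficulty, hashcash-style.
use sha2::{Digest, Sha256};

struct Node {
    queue_len: usize, // messages waiting to be processed: a proxy for load
    capacity: usize,  // what this node can comfortably keep up with
}

impl Node {
    /// Required difficulty (leading zero bits) rises as the node falls
    /// behind. An idle node asks for essentially no work at all.
    fn current_difficulty(&self) -> u32 {
        let overload = self.queue_len.saturating_sub(self.capacity);
        (overload / 16) as u32 // invented scaling; the real curve will differ
    }

    /// An incoming message is accepted only if its nonce proves enough work.
    fn accept(&self, msg: &[u8], nonce: u64) -> bool {
        let hash = Sha256::new()
            .chain_update(msg)
            .chain_update(nonce.to_le_bytes())
            .finalize();
        leading_zero_bits(&hash) >= self.current_difficulty()
    }
}

/// Count leading zero bits of a hash, hashcash-style.
fn leading_zero_bits(bytes: &[u8]) -> u32 {
    let mut bits = 0;
    for &b in bytes {
        if b == 0 {
            bits += 8;
        } else {
            bits += b.leading_zeros(); // u8::leading_zeros counts within 8 bits
            break;
        }
    }
    bits
}
```

A sender grinds nonces until `accept` would pass; an unloaded node makes that nearly free, while an overloaded one makes spamming it expensive.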
Third, clients and servers stand on equal footing. Capability-based access replaces server-side authorization. Instead of a server deciding what you can do based on your account, access is determined by holding unforgeable capabilities or references to resources. These capabilities can be narrowed in scope (e.g. write to read) and passed between nodes. A client node and a server node have the same kind of authority; whoever has the capability can act.
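Here’s a toy sketch of the attenuation rule, again with invented names. A real implementation would use cryptographically signed delegation chains rather than an in-memory struct, but the core invariant is the same: capabilities can only be narrowed.

```rust
// Hypothetical sketch: capabilities that can only ever get weaker.
// Write implies Read here; the derived ordering gives Read < Write.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum Rights {
    Read,
    Write,
}

/// An unforgeable reference to a resource plus the rights it grants.
struct Capability {
    channel_id: [u8; 32], // the resource (here: a channel) this refers to
    rights: Rights,
}

impl Capability {
    /// Attenuation: you may mint a weaker capability from one you hold,
    /// but no operation exists that produces a stronger one.
    fn narrow(&self, rights: Rights) -> Option<Capability> {
        if rights <= self.rights {
            Some(Capability { channel_id: self.channel_id, rights })
        } else {
            None
        }
    }
}
```

So a node holding a write capability to a channel can hand a friend read access without asking any server for permission; authority travels with the reference, not with an account.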
Fourth, components run locally, not on a remote server. Applications are Wasm components downloaded on demand and executed on your own node. Indeed, an application is just a channel of application versions that the author publishes to. Components import interfaces (for storage, networking, rendering) and export interfaces (for other components to use). The code runs on your hardware against your local data. A ‘web application’ isn’t some remote service you talk to; instead, application state is built by synchronizing with peers over channels and merging their data into unified views with CRDTs.
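In the real system these interfaces would be declared in WIT, but plain Rust traits capture the shape well enough for a sketch. The `Storage` and `Notes` interfaces below are invented examples, not interfaces from the spec.

```rust
// Hypothetical sketch of the component model's shape, in plain Rust traits.

/// An interface the component imports from its host node.
trait Storage {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn put(&mut self, key: &str, value: Vec<u8>);
}

/// An interface the component exports for other components to call.
trait Notes {
    fn add_note(&mut self, text: &str);
    fn list_notes(&self) -> Vec<String>;
}

/// The component itself: pure logic over host-provided capabilities,
/// running on your hardware against your local data.
struct NotesApp<S: Storage> {
    store: S,
}

impl<S: Storage> Notes for NotesApp<S> {
    fn add_note(&mut self, text: &str) {
        // Toy encoding: one note per line. A real app would use a CRDT
        // here so peers' edits merge cleanly.
        let mut notes = self.list_notes();
        notes.push(text.to_string());
        self.store.put("notes", notes.join("\n").into_bytes());
    }

    fn list_notes(&self) -> Vec<String> {
        self.store
            .get("notes")
            .map(|b| String::from_utf8_lossy(&b).lines().map(str::to_string).collect())
            .unwrap_or_default()
    }
}
```

The component never touches a socket or a disk directly; it can only act through the interfaces the host hands it, which is what makes the capability model above enforceable.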
Fifth, Isocore is a ‘protocol-protocol’. Any component can publish an interface that other nodes can call. An Isocore server is just a node that is always online at a consistent address and publishes some interfaces. A client is an intermittently-online node. The node-to-node messaging layer is symmetric, whether client-to-server, client-to-client, server-to-server, or server-to-client: every connection is encrypted, difficulty-gated, and bidirectional. To expand on this idea:
Abridged from the specification:

> Isocore provides a common way to specify interfaces and run components that implement them. Today, your browser doesn’t support JPEG-XL for many reasons. With Isocore, however, if someone writes a component that can handle JPEG-XL images, your website can load JPEG-XL images. […] As new system interfaces become available—e.g. spatial UI interfaces for Isocore on a VR device—existing applications will be able to provide presentation layers for protocols that did not exist at the time Isocore itself was devised.
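One way to picture this is a node holding a registry of published interfaces and dispatching calls to whichever component implements them. This is my own sketch of the idea, reusing the spec’s JPEG-XL example; none of these names come from the spec.

```rust
// Hypothetical sketch of "protocol-protocol" dispatch on a single node.
use std::collections::HashMap;

/// A named interface any component may publish an implementation of.
trait ImageDecoder {
    fn decode(&self, bytes: &[u8]) -> Result<Vec<u8>, String>; // raw pixels
}

struct Node {
    decoders: HashMap<String, Box<dyn ImageDecoder>>, // keyed by format name
}

impl Node {
    /// Installing a component that handles "image/jxl" is all it takes for
    /// every application on this node to gain JPEG-XL support.
    fn publish(&mut self, format: &str, decoder: Box<dyn ImageDecoder>) {
        self.decoders.insert(format.to_string(), decoder);
    }

    fn decode(&self, format: &str, bytes: &[u8]) -> Result<Vec<u8>, String> {
        match self.decoders.get(format) {
            Some(d) => d.decode(bytes),
            None => Err(format!("no component published for {format}")),
        }
    }
}
```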
Finally, Isocore will be fully interoperable with the world-wide web, at least at the beginning. I am working on a version of Isocore that compiles to Wasm and can run on the web. When an existing browser connects to an Isocore server, the server will serve a trojanesque response containing a little ‘web Isocore’ payload. This payload will connect back to the original server and to any other nodes it becomes aware of, and provide all sorts of little niceties (auth, sync, replication, RPC, and a capability-based component runtime) to whatever code happens to be running on the client.
I plan to run an instance of Isocore at home.isaac.sh before too long, and to run a little community there for people who are interested in writing this type of software together. Isocore is nowhere near finished, but it is my hope that one day many people will run Isocore nodes and we will all write and use local-first software together.
My girlfriend wanted me to write a post today titled “Adventure Capitalists,” about how VCs should take on even more risk than they do today. I told her that I wouldn’t even know where to begin, were I to write about that, but as a compromise, I would include this aside at the end.
I hope you enjoyed this post. If you would like to read a level-headed, engaging take on what someone else thinks the next web should look like, why not check out this incredible post?
Daily reading: Building The Next Web
By Robin Berjon, former co-chair of many W3C groups. He wants the web to be rebuilt, by structural default, around local-first principles, user agency, and interoperability. (And so do I.)