Ringspace Protocol Overview
The Web of Trust
What would it take to establish a closed network of creators who afforded some guarantee of humanity and verified identity to both visitors and other members of the network? Users need two guarantees about a site in order to “trust” it.
- Identity: “Are you who you claim to be?”
- Reputation: “Are you known to act in good faith?”
No single site can provide these guarantees for itself. An HTTPS certificate does not guarantee identity; it only tells the user that their traffic is encrypted and that the encryption keying material follows a known chain of trust. Neither of those is about identity as we care about it here. Our concern is whether a site is an upstanding member of a trusted community: the webring. To establish our own chain of trust, we might try relying on other members of the ring to verify identity in some way. But mutual webs of trust like that fail in predictable ways, like gaps in the web (one member doesn’t trust the others or intentionally excludes another node) or coordinated attacks on one node.
Certificate trust relies on root certificate authorities that browsers and other HTTP clients have agreed to trust, based largely on reputation. There’s a starting point.
I think a webring with a trust layer needs one too. We’ll call this the “ring server.” Its job is to poll members of the ring for a consensus about identity and reputation—more on reputation later.
The broad strokes look like this:

The ring server provides a “polling place” where members of the ring contribute their votes on the identity and reputation of other members. The server compiles the responses and reports the results back to anyone asking, typically a user visiting the site in question. That user receives more than just a checkmark: a full story about what the ring thinks of this site, whether the identity matches expectations, and whether it is in good standing.
Design Details
Keep in mind this is a work in progress, a proof of concept, with many implementation details still unsettled. Still, it makes a solid starting point.
Each member of the ring produces an Ed25519 keypair for signing data. The private key is, of course, kept private. The public key is shared with other members and the ring server. Each member publishes a manifest containing their site’s details, including name, URL, description, and public key. There’s more in there as well, but we’ll get to that. The common location for the manifest is .well-known/webrings.json underneath the web root. The manifest is signed with the site’s private key. When a user asks to verify a site, the site provides its manifest, allowing the server to compare the provided public key and signature with what it has on record. The result is returned to the user, along with the ring’s voting results for the site in question.
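The verification flow above can be sketched as follows. This is a hypothetical illustration, not the reference implementation: HMAC-SHA256 stands in for Ed25519 because the Python standard library has no Ed25519 support (a real implementation would use libsodium), and the field names are assumptions.

```python
import hashlib
import hmac
import json

def canonical_bytes(manifest: dict) -> bytes:
    """Serialize the manifest deterministically, excluding the signature itself."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    return json.dumps(body, sort_keys=True, separators=(",", ":")).encode()

def sign_manifest(manifest: dict, private_key: bytes) -> dict:
    # HMAC stand-in: with real Ed25519, signing uses the private key and
    # verification uses the separate public key.
    sig = hmac.new(private_key, canonical_bytes(manifest), hashlib.sha256).hexdigest()
    return {**manifest, "signature": sig}

def verify_manifest(manifest: dict, key_on_record: bytes) -> bool:
    """Ring server check: does the signature match the key we have on record?"""
    expected = hmac.new(key_on_record, canonical_bytes(manifest), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest.get("signature", ""))

site_key = b"example-site-secret"
manifest = sign_manifest(
    {"name": "Example Site", "url": "https://example.org", "description": "A site"},
    site_key,
)
```

Canonicalizing the JSON (sorted keys, fixed separators) before signing matters: two byte-for-byte different serializations of the same manifest would otherwise produce different signatures.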
How does the ring server get the public keys in the first place? An invite code in the form of a UUIDv4 is created on the ring server, tied to the URL of the invited member site. The joining member submits a join request to the ring server’s API with the invite code and site details, including their public key. The invite code serves as authentication for this request, and should be shared by some other secure means. Ringspace does not provide the totality of what a community requires for communication; it is assumed other channels are available.
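A minimal sketch of that join flow, assuming an in-memory server and invented method names (the actual API surface is not specified here):

```python
import uuid

class RingServer:
    def __init__(self):
        self.invites = {}   # invite code -> invited site URL
        self.members = {}   # site URL -> public key

    def create_invite(self, site_url: str) -> str:
        # UUIDv4 invite code, tied to the URL of the invited member site.
        code = str(uuid.uuid4())
        self.invites[code] = site_url
        return code

    def join(self, code: str, site_url: str, public_key: str) -> bool:
        # The invite code authenticates the request, and only for the URL
        # it was issued for. Codes are single-use.
        if self.invites.get(code) != site_url:
            return False
        del self.invites[code]
        self.members[site_url] = public_key
        return True

server = RingServer()
code = server.create_invite("https://example.org")
joined = server.join(code, "https://example.org", "base64-public-key")
```

The code itself is the shared secret, which is why it must travel over a channel outside Ringspace.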
Keep in mind: inefficiency is a design goal in this system. In fact, we happily answer “No” to that most insipid of technology questions: dOeS iT sCaLe??
A webring should be a small group. Otherwise you’re not really building a human community that trusts one another anyway.
So that’s identity sorted. What about reputation?
Reputation Tracking
For me, reputation concerns human-generated material on the site, but you could imagine reputation meaning something much different for a certain webring. It might include topicality, accessibility standards, or tone. Whatever the community decides is worth upholding. So while we’re asking sites to vote on identity, we might as well ask them to vote on reputation as well. To keep it simple, the vote will comprise a binary for good_standing and a categorical string for reason. By categorical, we mean there would be a limited set of values accepted for that field—whatever the ring administrators decide.
This is where things start to get really interesting. If the votes are in sites’ manifests, you could imagine that a site being voted down would want to know who’s accusing them of what. If the vote is in the manifest, the accuser is public. That’s a safety issue, as it makes anyone voting against a site a potential target for retaliation. If we’re building a better web, we can’t introduce obvious attack vectors like that.
In addition to the site’s data, each member site manifest contains voting data on other members of the ring. A vote consists of a Standing—Good, Bad, or Neutral—and a description of the vote.
Why just three voting options? Why not a 1-10 scale, or 1-100? Voting should be simple, and provide strong signals to users about the reputation of a given site. A 1-10 scale where everyone votes 7 is not a strong signal. The Neutral option is available for the noncommittal.
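The vote record described above might look like this sketch. The enum values, reason strings, and field names are illustrative assumptions; a real ring's administrators would define their own categorical set.

```python
from enum import Enum

class Standing(Enum):
    GOOD = "good"
    BAD = "bad"
    NEUTRAL = "neutral"

# Reasons are categorical: only values from an admin-defined set are accepted.
ALLOWED_REASONS = {"on_topic", "off_topic", "spam", "abuse", "no_comment"}

def make_vote(subject_url: str, standing: Standing, reason: str) -> dict:
    if reason not in ALLOWED_REASONS:
        raise ValueError(f"unknown reason: {reason}")
    return {"subject": subject_url, "standing": standing.value, "reason": reason}

vote = make_vote("https://example.org", Standing.GOOD, "on_topic")
```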
One important distinction between a site’s manifest and the ring server’s manifest is that the server advertises two keys: the signing key and an X25519 public encryption key. While Ed25519 is great for signing, it is inappropriate for encryption. Luckily, libsodium offers the sealed box scheme for encrypting votes with the ring server’s public encryption key. This guarantees that votes are readable only by the ring server. While sealed boxes do not guarantee provenance the way signatures do, the member’s signing key serves that purpose.
The presence of a vote is apparent in the manifest, as the encrypted votes are visibly keyed by the vote subject URL. And even when users request validation of a site, the returned data is anonymized such that a vote breakdown is provided, along with an unlabeled set of descriptions. This provides context without clear attribution, for the protection of ring members.
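The anonymized response might be assembled like this. The shape of the result (a standings tally plus unlabeled reasons) follows the description above; the exact field names are assumptions.

```python
from collections import Counter

def anonymized_result(votes: list[dict]) -> dict:
    """Tally standings and return reasons without any voter attribution."""
    breakdown = Counter(v["standing"] for v in votes)
    # Sorting the reasons also strips any correlation with the order in
    # which votes were received or stored.
    reasons = sorted(v["reason"] for v in votes)
    return {"breakdown": dict(breakdown), "reasons": reasons}

votes = [
    {"voter": "a.example", "standing": "good", "reason": "on_topic"},
    {"voter": "b.example", "standing": "good", "reason": "on_topic"},
    {"voter": "c.example", "standing": "bad", "reason": "spam"},
]
result = anonymized_result(votes)
```

Note that the `voter` field never appears in the output, which is the whole point: users get the breakdown and the context, not the attribution.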
Does this require significant trust in the ring server? It does, but because the presence of a vote is visible in each manifest, there is at least a publicly verifiable record that a vote for (or against) a site exists, even if its content is not publicly readable.
Just as with identity, the server responds with the result, including reasons for negative votes. This allows users to accept the risk of reading the site if they so choose. That said, well-maintained rings would eventually take action against sites in poor standing, by means of revoking their public key.
An important role of the ring server is to track each member’s vote history, in order to identify patterns of abuse. While public histories are anonymized, ring administrators have a full audit trail with which to correlate voting activity.
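What an admin-side pattern check could look like, as a hypothetical sketch (not part of the spec): flag any voter whose history against a single target is dominated by negative votes, a possible sign of a pile-on.

```python
from collections import Counter

def flag_suspicious(history: list[dict], threshold: int = 3) -> set[tuple[str, str]]:
    """History entries look like {'voter', 'subject', 'standing'} (names assumed).
    Returns (voter, subject) pairs with at least `threshold` negative votes."""
    negatives = Counter(
        (h["voter"], h["subject"]) for h in history if h["standing"] == "bad"
    )
    return {pair for pair, count in negatives.items() if count >= threshold}

history = [
    {"voter": "a.example", "subject": "target.example", "standing": "bad"},
    {"voter": "a.example", "subject": "target.example", "standing": "bad"},
    {"voter": "a.example", "subject": "target.example", "standing": "bad"},
    {"voter": "b.example", "subject": "target.example", "standing": "good"},
]
flagged = flag_suspicious(history)
```

A real audit would weigh timing and reasons too; this only shows why the server needs the unanonymized history at all.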
Distributed Information
While the ring server provides an essential coordination role, the majority of the information comprising the ring is kept in each member’s webrings.json file. These manifests all contain the following data:
- Site-specific data (URL, public key, last updated timestamp)
- Ring server data (URL, public key)
- Votes (per-URL, vote data encrypted with server’s public key)
- Manifest signature (algorithm, signature, timestamp, original information)
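Concretely, a manifest carrying those four sections might look like the sketch below. All field names and values here are illustrative assumptions, not a fixed schema.

```python
# Sketch of a member's webrings.json, mirroring the list above.
example_manifest = {
    "site": {
        "name": "Example Site",
        "url": "https://example.org",
        "public_key": "base64-ed25519-public-key",
        "updated": "2024-01-01T00:00:00Z",
    },
    "ring": {
        "url": "https://ring.example",
        "public_key": "base64-ring-signing-key",
    },
    "votes": {
        # Keyed per URL; the value is sealed-box ciphertext readable
        # only by the ring server.
        "https://other-member.example": "base64-sealed-box-ciphertext",
    },
    "signature": {
        "algorithm": "ed25519",
        "signature": "base64-signature-over-canonical-manifest",
        "timestamp": "2024-01-01T00:00:00Z",
    },
}
```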
With this simple manifest and a coordination server, a closed network of sites can begin to establish trust. New sites can join the ring, provided there is some mechanism for existing sites to be informed about new additions (working on that). The server can provide updated member listings upon request, or via another mechanism like an RSS feed. The latter would be particularly useful in helping to automate the update of member manifests.
Do We Really Need a Server?
This question comes up frequently in design conversations around distributed trust. How important is the hub—can we just do Oops! All Spokes? Initially, the design of Ringspace minimized (or omitted altogether) a coordination server, attempting fully peer-to-peer organization. Why was this abandoned? Roughly for the same reason you want root certificate authorities and moderators on social media. Without a central point of trust and enforcement, the possibilities for abuse are significant. Take voting for example. The central server affords the ring the ability to encrypt all votes with a single shared public key. Absent this, the most reasonable approach would be to advertise votes in cleartext—after all, how else could a user hope to see any information from them? With votes in cleartext, suddenly voters can be the targets of brigading and other forms of retaliation.
TL;DR: the server is a compromise between safety and full decentralization. I’m leaning toward safety.
Ring Reputation
Ringspace does not provide a mechanism for ring reputation, although one could certainly be built on top of it. This is because I don’t actually think rings need to prove their reputation as rings. It’s not the case that people will (or did) go browsing for webrings and find one they liked. Instead, they found individual sites they liked, and discovered they were part of a ring. They then explored that ring and learned to trust other sites. Put another way, the trust is bottom-up, not top-down. As such, trust in the ring proceeds from trust in member sites.
This also somewhat mitigates the notion of a crop of malicious, slop-filled rings springing up trying to trick the user into trusting them merely by virtue of being in a ring. That’s not how it works. The individual sites have to gain trust first, and then the ring becomes trusted.
Site/Ring Management
Handling all aspects of a ring, either as a ring member or administrator, can be daunting. That’s why the reference implementation provides a single binary that serves as site CLI tool, ring CLI tool, and even the API server itself! One ringspace application handles all aspects of ring community management.
Multiple Rings
Suppose a site wanted to join multiple rings. The structure of webrings.json allows this. It would even be possible for a site to have a Good standing in one ring but a Bad standing in another.
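One way the manifest could accommodate several rings is to key ring-specific data by ring server URL. This layout is an assumption for illustration; the spec's actual multi-ring structure isn't shown here.

```python
# Hypothetical multi-ring layout: each ring gets its own section, so
# votes (and, from the ring's side, standing) are tracked per ring.
multi_ring_manifest = {
    "rings": {
        "https://ring-one.example": {
            "public_key": "ring-one-signing-key",
            "votes": {"https://peer.example": "sealed-ciphertext"},
        },
        "https://ring-two.example": {
            "public_key": "ring-two-signing-key",
            "votes": {},
        },
    }
}
```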
Threat Model
No trust system is perfect, and what I’m proposing here has some tried-and-true attack vectors.
Server Attacks
A central server exposes the ring to takeover if the server itself is compromised. If a malicious actor gains control of the server, vote histories could be manipulated, additional sites added to the ring, sites removed from the ring, and so on. Defending this server is a top priority for a ring community. Denial-of-service attacks are also a concern, so strategies like replication, load balancing, and even deploying tools like Anubis or a web application firewall in front of the server would be best practice.
PKI Compromise
Should any private key in the ring (including the server’s) be leaked, the ring could be compromised. Disclosure of a single site’s private key would allow spoofing of the site and could let a malicious actor submit fraudulent votes. One way to defend against this is to require both the keypair’s private key and an API key for the ring server. Is this true multi-factor authentication? Kinda, since arguably the private key is something you have, and could even be stored on dedicated hardware. For now, a signature over requests made with the private key serves as authentication and authorization. I’m trying to keep complexity down here, but nothing precludes adding stronger authentication to this technology if desired.
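The two-check authentication idea can be sketched like this. As before, HMAC-SHA256 stands in for the Ed25519 request signature (the stdlib has no Ed25519), and all names here are hypothetical.

```python
import hashlib
import hmac

def authenticate(body: bytes, signature: str, api_key: str,
                 key_on_record: bytes, api_key_on_record: str) -> bool:
    """Require BOTH a valid API key and a valid signature over the body."""
    key_ok = hmac.compare_digest(api_key, api_key_on_record)
    expected = hmac.new(key_on_record, body, hashlib.sha256).hexdigest()
    sig_ok = hmac.compare_digest(expected, signature)
    return key_ok and sig_ok

site_key = b"site-secret"
body = b'{"vote": "sealed-ciphertext"}'
sig = hmac.new(site_key, body, hashlib.sha256).hexdigest()
ok = authenticate(body, sig, "api-123", site_key, "api-123")
```

An attacker who steals only one of the two factors (the key or the API credential) still cannot produce an accepted request.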
Ring Splitting
This one is as much a feature as a threat vector. One or more sites could decide to take their public keys and go elsewhere, establishing a new ring and web of trust. Assuming everyone plays ball with webrings.json and doesn’t modify their own site to serve a separate file, pointing the manifest at a different ring server (even with the same key) would invalidate it for the original server.
People are free to change their minds and make new rings. That part is not a threat, per se. If so many people in a ring leave that the ring loses quorum, then yes, that would have significant impact. But that needs to be a choice people can make. On the other hand, a site hosting multiple manifests is acting in bad faith.
Vote Fatigue
From a site owner perspective, one of the difficult parts of this scheme is maintaining votes on sites. For a 10-site ring, every owner would need to maintain 9 votes for perfect polling. We could use a “no vote == good standing” policy, but then it would become quickly evident that whoever does have a vote in their manifest is voting negatively. That defeats the purpose of encrypting the vote content. There’s a solution to this we’ll get to in the next section.
Manifest Spoofing
What stops someone from copying a legitimate webrings.json to a clone, modifying the content, but still claiming to be part of the trusted ring?
Well, nothing. The trick is where verification takes place. If we rely on the old webring “widgets” on the sites themselves, the potential for abuse is significant. Instead, the verification code must be in the browser itself. A browser extension reasonably guarantees that the hosted manifest is advertising the same URL/Origin as its current location. If not, the tool alerts about potential spoofing.
If an attacker attempts to change the contents of the manifest, the signature no longer verifies. If they spoof the keys to make the signature verify, the key they advertise no longer matches the server’s, resulting in a failed validation.
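The extension's checks might be sketched like this. Function and field names are assumptions, and a complete check would also verify the Ed25519 manifest signature, which this sketch omits.

```python
from urllib.parse import urlparse

def same_origin(current_url: str, manifest_url: str) -> bool:
    """Compare scheme, host, and port: the core spoofing check."""
    a, b = urlparse(current_url), urlparse(manifest_url)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

def validate(current_url: str, manifest: dict, server_record: dict) -> str:
    # Check 1: the manifest must advertise the origin it is served from.
    if not same_origin(current_url, manifest["url"]):
        return "spoofing: manifest advertises a different origin"
    # Check 2: the advertised key must match the ring server's record.
    if manifest["public_key"] != server_record["public_key"]:
        return "key mismatch: advertised key differs from the server's record"
    return "ok"

record = {"public_key": "abc"}
genuine = validate(
    "https://example.org/page",
    {"url": "https://example.org", "public_key": "abc"},
    record,
)
cloned = validate(
    "https://evil.example/",
    {"url": "https://example.org", "public_key": "abc"},
    record,
)
```

A cloned manifest fails check 1; a modified-and-rekeyed manifest fails check 2, which is exactly the two-step failure described above.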
Vote Attacks
What if a group of ring members decides to gang up on one to destroy one member’s reputation? This is essentially indistinguishable from the design intent of the voting system. Let’s take the converse, in which a site does start acting in bad faith, and the ring responds appropriately with changed votes to reflect their displeasure. Is that “piling on?” No indeed; that’s the system working as expected.
But this is why the coordination server (and the people) behind it are so important. Fully decentralized trust and safety is no trust and safety at all. In the event of an organized “vote attack,” the ring administrators can review the votes, the reasons, and the target site to see whether the complaints are legitimate. If not, many courses of action are available. This is an area of growth for Ringspace. Potentially, admins could choose to revoke voting privileges for member sites (unimplemented), or remove them from the ring for a period of time (possible, clunky).