Nature

Introduction & Technical Overview of SAFE Consensus

The features included in decentralized networks can vary widely depending on the goals of the technology. From the file-sharing capabilities offered by BitTorrent to the user privacy enabled by Tor’s routing protocol, network designs directly reflect the mission set forth by their architects. Within autonomous networks that depend on data and system integrity, where network-critical actions may fail or produce faulty outputs, consensus mechanisms are an important feature for optimising reliability. Just as in wider society, where important business or policy decisions are typically deferred to a board or committee rather than left to a single individual, computer systems which manage data and user accounts in a diverse environment face considerable potential for parts of that system to be inaccurate or unresponsive. In commonly owned and openly participatory networks, the risk of malicious behavior adds even greater importance to consensus around the network’s current state and the actions it takes.

As a decentralized, peer-to-peer network with an initial focus on reliable storage and communication, the SAFE Network requires a high standard of data integrity. User files and account information are secured and stored in such a way that no major outage should affect access to data on the main network. While most P2P networks gain security from a global network of distributed nodes (the SAFE Network further obfuscates traffic using global multi-hop routing), critical decisions for maintaining the security of stored data are kept “localised” in SAFE for increased efficiency and privacy.

The Nature of Consensus: Global & Local

Before diving into the specifics of SAFE consensus, let’s do a bit of comparison with other recent developments in decentralized consensus design. One of the more interesting implementations was introduced several years ago with the launch of Bitcoin. The combination of proof of work and blockchain technologies has enabled an extremely reliable way to track a permanent, ordered list of cryptographically secure transactions. All nodes within the Bitcoin network store and verify a copy of the blockchain, which protects against tampering with historical transactions and faulty reporting of newly created ones. Unfortunately, the global and public nature of Bitcoin’s consensus process creates drawbacks with regard to efficiency and privacy. The fact that all nodes in the network need to track and agree on a single, ever-growing ledger has proven to be a scaling problem and makes deep analysis of the ledger and user profiling easier. While various efforts are looking to solve these issues, the years of research carried out by the MaidSafe team have resulted in a consensus mechanism designed specifically for privacy and efficiency – a different goal from the proof of concept architected by Satoshi Nakamoto for Bitcoin. This protocol is the basis of the SAFE Network and, compared to Bitcoin, takes a very different approach, enabling actions and verifying network states based on local node consensus.

Those following MaidSafe may know of our preference for emulating natural systems, which have had hundreds of thousands of years of testing in diverse environments and harsh conditions. This philosophy also helps explain, at a high level, our approach to consensus. Animal societies of all kinds localise decisions to reach agreements about immediate threat levels and other states of being, while brains have evolved to localise neuron function for efficiency. Local consensus also allows the more sophisticated societies humans form to make intelligent decisions about sensitive actions, such as an elected committee deciding on substantial policy changes for a community. Of course, these social arrangements come with their own vulnerabilities if the individuals involved in consensus decisions share self-interested goals which do not reflect the interests of those they govern. Thankfully, in computer networks, measures can be implemented which prevent abuse (or misuse) of local consensus by nodes, and it all starts with the foundation on which the network is built.

XOR Close Group Consensus

A recent post on this blog titled Structuring Networks with XOR outlines the basics of defining closeness within the SAFE Network’s distributed hash table (DHT) implementation. If you are not familiar with the foundations of Kademlia-based DHTs, that post is a prerequisite for effectively understanding the consensus process in SAFE, which we will now dive into. As we explore how local consensus is achieved using XOR closeness, it is important to keep in mind that “closeness” in this sense does not refer to geographical closeness, but to distance between network addresses. So when nodes have IDs which are close according to the XOR calculation, their physical locations could be on opposite sides of the planet.

By relating network elements in terms of XOR closeness, a unique group of the closest nodes to a given object can be determined and can subsequently act in a managerial role for it. As long as objects have unique IDs which can be expressed in binary, everything from data to nodes can be related in terms of distance and assigned a close group of network nodes (or as we call them, Vaults). This close group of Vaults can take on a variety of purposes depending on the object they surround, but these centre on consensus processes for managing data and node integrity. The graph below shows how we can relate any object with ID n to its four closest online Vaults.

[Figure: closest-group – an object with ID n and the four online Vaults closest to it]

Whether the data is public keys, encrypted personal files and client account information, or cryptocurrencies, close group authority is the basis for the SAFE Network’s ability to self-manage and self-heal. As long as nodes are not able to predetermine their location in the DHT address space, inclusion within a close group is essentially random, which drastically reduces any chance of group members colluding to disrupt the integrity of an action on a particular piece of data. A future post detailing security against various types of attack will dive deeper into concepts such as how IDs are assigned, but for the purpose of understanding the consensus mechanism, we can view assignment as random. Further, each group consensus requires a quorum of Vaults for authority approval, which protects against a minority of unreliable nodes. The exact quorum-to-group-size ratio will be investigated as part of ongoing test networks to balance efficiency with security. Additionally, as Vaults in the network come online and go offline (referred to as network churn), the members of close groups will be in a constant state of flux to accommodate new or lost nodes.
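To make the closeness and quorum ideas concrete, below is a minimal Rust sketch (not MaidSafe’s actual routing code) that picks the close group for a target ID by XOR distance and checks whether a quorum of group members agreed. The 256-bit IDs, the group size of 4 and the quorum of 3 are illustrative assumptions only; as noted above, the real quorum-to-group-size ratio is still being tuned in test networks.

```rust
// Illustrative sketch only: close group selection by XOR distance and a
// simple quorum check. Group size and quorum are assumed values.

const GROUP_SIZE: usize = 4;
const QUORUM: usize = 3;

type NodeId = [u8; 32]; // 256-bit address in a Kademlia-style DHT

/// XOR distance between two IDs, compared lexicographically byte by byte.
fn xor_distance(a: &NodeId, b: &NodeId) -> NodeId {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = a[i] ^ b[i];
    }
    d
}

/// The GROUP_SIZE online Vaults closest to `target` form its close group.
fn close_group(target: &NodeId, online_vaults: &[NodeId]) -> Vec<NodeId> {
    let mut vaults = online_vaults.to_vec();
    vaults.sort_by_key(|v| xor_distance(v, target));
    vaults.truncate(GROUP_SIZE);
    vaults
}

/// A group decision only stands if at least QUORUM members agreed to it.
fn quorum_reached(votes_in_favour: usize) -> bool {
    votes_in_favour >= QUORUM
}

fn main() {
    let target: NodeId = [7u8; 32];
    let vaults: Vec<NodeId> = (0u8..10).map(|i| [i; 32]).collect();
    let group = close_group(&target, &vaults);
    let first_bytes: Vec<u8> = group.iter().map(|v| v[0]).collect();
    println!("close group (first byte of each ID): {:?}", first_bytes);
    println!("3 of 4 votes, quorum reached: {}", quorum_reached(3));
}
```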

Node and Group Authorities

The variety of close group authorities formed in SAFE is fundamentally determined by the ID of the object which the Vaults in that group surround. These distinct consensus responsibilities are referred to as personas. Client nodes act as a complementary authority for user-authorised actions within the network and differ from Vault nodes in that they do not route data but instead act as an interface for users to connect to the network and access or upload data. Each Vault node can also be considered an authority, with the extremely limited capability of responding to requests for the data it stores. Using cryptographic key signing, the network verifies authority based on messages sent by Clients, personas and individual Vaults.

Some actions require only Client and Vault cryptographic authorisation (such as reading data already uploaded to the network), while others involve at least one persona consensus as well (such as storing new data on the network). Autonomous actions require no Client authority and rely solely on persona consensus and Vault cryptographic authorisation (such as network reconfiguration to reassign data storage when a Vault goes offline). These autonomous processes are what enable the SAFE Network to heal itself and manage nodes and stored data without any centralised authority or Client action. This is a major difference from BitTorrent’s implementation of Kademlia, which does not guarantee availability of data – if the few BitTorrent nodes hosting a niche piece of content all eventually go offline, there is no network procedure for reallocating that data and it therefore becomes inaccessible.

The Four Authorities

The network’s routing protocol defines four authorities. There are two node authorities: Client and ManagedNode (a Vault storing data); and two fundamental persona types: the ClientManager and NaeManager (Network Addressable Element Manager) consensus groups. The fundamental persona types are defined based on placement in the DHT and XOR closeness to object IDs, while ManagedNodes are determined by their inclusion within a NaeManager group. The persona types are subcategorised into specialised responsibilities based on the type of data or account which they surround. It is expected that personas will overlap, meaning a single Vault might be included within several close groups simultaneously while also storing data assigned to it. The table below summarises these authorities and their responsibilities.

Authority              Network component        Persona sub-type               Responsibility
Client                 Client node              N/A                            Private & public data access
ClientManager persona  Vault node close group   MaidManager                    Client authentication & PUT allowance
                                                MpidManager                    Client inbox/outbox
NaeManager persona     Vault node close group   DataManager (immutable data)   GET rewards & data relocation
                                                DataManager (structured data)  GET rewards, data updates & data relocation
                                                ComputeManager (TBA)           TBA
ManagedNode            Vault node               N/A                            Store data & respond to data requests
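As a rough illustration of how these roles might be represented in code, the following Rust sketch encodes the authorities and persona sub-types from the table and lists, in simplified form, the chain of authorities that a few example actions pass through. The enum names follow the post; the authority_chain function and its action strings are hypothetical and only summarise the flows described in the sections below.

```rust
// Illustrative sketch of the four routing authorities and persona sub-types
// from the table above. The authority_chain function is hypothetical and
// only summarises, in simplified form, the flows described later in the post.

#[derive(Debug, Clone, Copy)]
enum Authority {
    Client,                       // user-side node; does not route others' data
    ClientManager(ClientPersona), // close group around a Client ID
    NaeManager(DataPersona),      // close group around a network addressable element
    ManagedNode,                  // individual Vault storing a chunk
}

#[derive(Debug, Clone, Copy)]
enum ClientPersona {
    MaidManager, // client authentication & PUT allowance
    MpidManager, // client inbox/outbox
}

#[derive(Debug, Clone, Copy)]
enum DataPersona {
    ImmutableDataManager,  // GET rewards & data relocation
    StructuredDataManager, // GET rewards, owner updates & data relocation
}

/// Simplified chain of authorities an action passes through, in order.
fn authority_chain(action: &str) -> Vec<Authority> {
    use Authority::*;
    match action {
        // storing new immutable data: Client -> MaidManager group -> DataManager group -> Vaults
        "put_immutable" => vec![
            Client,
            ClientManager(ClientPersona::MaidManager),
            NaeManager(DataPersona::ImmutableDataManager),
            ManagedNode,
        ],
        // owner-signed update to structured data, acknowledged by its managers
        "update_owned_data" => vec![
            Client,
            NaeManager(DataPersona::StructuredDataManager),
            ManagedNode,
        ],
        // messaging another Client via the public ID's inbox/outbox
        "send_message" => vec![Client, ClientManager(ClientPersona::MpidManager)],
        // reading data: the storing Vault answers; DataManagers only confirm for rewards
        "get" => vec![Client, ManagedNode],
        // churn handling is autonomous: no Client authority involved
        "relocate_data" => vec![
            NaeManager(DataPersona::ImmutableDataManager),
            ManagedNode,
        ],
        _ => vec![],
    }
}

fn main() {
    for action in ["put_immutable", "get", "send_message", "update_owned_data", "relocate_data"] {
        println!("{action}: {:?}", authority_chain(action));
    }
}
```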

Client

While Clients have authority outside of group consensus, as previously mentioned, they have limited control and are never in a position to affect data reliability. Clients are the windows into the network for users and therefore handle data and messages from a client-side perspective, such as encryption and decryption of uploaded data. For each Client connection into the network there is an anonymising proxy node which relays all data to and from destinations within the network, but the proxy does not have the ability to read any of it (for those familiar with Tor, this function is akin to a “guard node”).1

ClientManager

The ClientManager persona consists of the Vaults closest to a Client ID and is subcategorised into the MaidManager and MpidManager personas. MaidManager (Maid Anonymous ID) surrounds the personal ID associated with a Client and has the responsibility of managing that user’s personal account functions, such as upload allowance and authentication. MpidManager (Maid Public ID) surrounds the public ID associated with a Client and is responsible for maintaining the Client’s inbox and outbox for sending messages to other Clients.

NaeManager

The NaeManager persona consists of the Vaults closest to network addressable elements such as data chunks. The initial release of SAFE will focus on implementing the DataManager persona type to enforce data storage reliability, with future plans for a ComputeManager persona type for reliably computing on data. DataManager is further subcategorised into functions managing immutable data and structured data. ImmutableDataManagers are the group of Vaults closest to the ID of an immutable chunk of data; they manage GET rewards (safecoin farming) for successful ManagedNode responses and the relocation of data when one of those nodes goes offline. Immutable data chunks are encrypted pieces of user-uploaded files with IDs derived from the data chunk itself. A file can only be reassembled by users with access to the corresponding data map, which acts as a symmetric key file. StructuredDataManagers are closest to the ID of structured data: small, mutable data files owned by users on the network, such as DNS entries, safecoin and the data maps used for immutable file reassembly. In addition to managing GET rewards for the ManagedNodes storing the file and relocation responsibilities, StructuredDataManager will also acknowledge updates initiated by the owners of the data (such as reassigning ownership to another user).
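A minimal Rust sketch of the two data types these personas manage is shown below. The field names are illustrative, and std’s DefaultHasher merely stands in for the cryptographic hash the real network would use to derive an immutable chunk’s ID from its content.

```rust
// Sketch of the two data types managed by NaeManager personas. The hash and
// field names are stand-ins chosen to keep the example dependency-free.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

type DataId = u64; // stand-in for a full-length network name

/// Immutable data: the ID is derived from the (encrypted) content itself,
/// so identical chunks map to the same network location.
struct ImmutableData {
    content: Vec<u8>,
}

impl ImmutableData {
    fn id(&self) -> DataId {
        let mut h = DefaultHasher::new();
        self.content.hash(&mut h);
        h.finish()
    }
}

/// Structured data: small, mutable and owner-controlled (e.g. DNS entries,
/// data maps). Updates must come from the listed owners and are acknowledged
/// by the StructuredDataManager group.
struct StructuredData {
    id: DataId,
    version: u64,
    owners: Vec<String>, // placeholder for owners' public keys
    payload: Vec<u8>,
}

impl StructuredData {
    /// Owner-initiated update, e.g. a new payload or reassigned ownership.
    fn update(&mut self, new_payload: Vec<u8>) {
        self.version += 1;
        self.payload = new_payload;
    }
}

fn main() {
    let chunk = ImmutableData { content: b"encrypted chunk bytes".to_vec() };
    println!("immutable chunk id: {:x}", chunk.id());

    let mut entry = StructuredData {
        id: 42,
        version: 0,
        owners: vec!["owner-public-key".into()],
        payload: b"dns record v0".to_vec(),
    };
    entry.update(b"dns record v1".to_vec());
    println!("structured data {} owned by {:?} now at version {}",
             entry.id, entry.owners, entry.version);
}
```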

ManagedNode

Like Clients, ManagedNodes have limited control in the authority functions they take on as individual Vaults. They only have control over the data which they store and their responses to requests for that data. Since all uploaded data is stored redundantly across at least a few ManagedNodes, no single node is ever in total control of any piece of data. All Vaults in the network may store data and will take on this limited ManagedNode authority over a piece of data when assigned to the DataManager group surrounding that data’s ID. This means all DataManagers will also be ManagedNodes storing that data. The role a Vault takes on as a ManagedNode storing (or eventually computing on) a piece of data is directly dependent on its role as a DataManager group member for that data, but the two authorities are nonetheless distinct.

The illustration below shows the relationships between a Client (n) and the closest online Vaults which make up its ClientManager (n+2, n+5, n+7, n-8), a data chunk (m) and the closest online Vaults which make up its DataManager (m-1, m-2, m-3, m+7), and those DataManager Vaults also acting as ManagedNodes storing m.

[Figure: circle-groups – a Client (n) with its ClientManager group and a data chunk (m) with its DataManager group, whose Vaults also act as ManagedNodes storing m]

Mapping Network Actions

With these various roles and authorities in mind, let’s explore some actions which give a more complete view of how the network functions. For simplicity, we’ll use the same groups as the previous illustration for all examples. Each action within the network originates from one of the four authorities. The Client initiates actions for storing new data or for modifying and reading existing data it has access to, while the DataManager authority initiates restoring a piece of data when a ManagedNode becomes unresponsive. A ManagedNode never initiates an action and only exercises its authority in response to requests for data. Every action additionally involves cryptographic verification of the authorities involved at each step.

PUT

When a logged-in user uploads a new piece of data to the SAFE Network, a series of authorities come into play to store it securely and privately. If a Client is putting a standard file type (such as a document or image) onto the network, it will first locally self-encrypt the data into chunks while creating a data map (symmetric key), and then upload each data piece to be stored and managed at its own location on the DHT. As mentioned, these immutable data chunks have unique IDs derived from the contents of the encrypted chunk itself, while the decryption key, or data map, is uploaded with its own unique ID as a structured data file. The self-encryption video linked above illustrates how the data map is both a decryption key and a list of data chunk IDs which double as pointers for locating them in the network. The authorities involved in uploading a single piece of data (whether immutable or structured) are as follows:

[Figure: circle-groups-PUT – the chain of authorities involved in storing a new piece of data]

The Client sends a signed message, containing the data chunk, via its bootstrap relay node to its own ID, where it is picked up by the MaidManager close group in that part of the network. After checking that the authority comes from the Client, a quorum of Vaults within this group must confirm the Client’s storage allowance before deducting an amount and sending the data, along with a message signed by the group, to the location in the network where the ID for that data exists. If the data already exists, no allowance is deducted and the Client is instead given access to the existing data. If it does not exist, the message from the MaidManager group is picked up by the group of Vaults closest to the data ID, which forms a new DataManager and checks that the authority comes from the MaidManager persona. The DataManager Vaults then initiate storage on each Vault in the group as individual ManagedNodes. Each ManagedNode sends a success response (or a failure in the case of insufficient resources) back to the data ID, which is again picked up by the DataManager and forwarded to the Client ID, where it is in turn picked up by the ClientManager and forwarded back to the Client.
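To make the chunking and data-map step concrete, here is a minimal sketch of the idea rather than the actual self-encryption implementation: the file is split into chunks, each chunk receives a content-derived ID, and the data map records those IDs. The chunk size is an arbitrary assumption, the hash is a dependency-free stand-in, and the actual encryption of each chunk (which the real network derives from neighbouring chunks) is omitted.

```rust
// Illustrative sketch of chunking a file and building its data map.
// Encryption is omitted; the hash is only a stand-in for a real one.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const CHUNK_SIZE: usize = 1024; // assumed size, for illustration only

type ChunkId = u64; // stand-in for a full-length network name

/// The data map doubles as key material and a list of chunk IDs, which also
/// serve as pointers to each chunk's location in the DHT.
struct DataMap {
    chunk_ids: Vec<ChunkId>,
}

fn content_id(chunk: &[u8]) -> ChunkId {
    let mut h = DefaultHasher::new();
    chunk.hash(&mut h);
    h.finish()
}

/// Split a file into chunks and build its data map. Each chunk would then be
/// PUT to the part of the DHT closest to its own ID.
fn self_encrypt(file: &[u8]) -> (Vec<Vec<u8>>, DataMap) {
    let chunks: Vec<Vec<u8>> = file.chunks(CHUNK_SIZE).map(|c| c.to_vec()).collect();
    let chunk_ids = chunks.iter().map(|c| content_id(c)).collect();
    (chunks, DataMap { chunk_ids })
}

fn main() {
    let file = vec![7u8; 5000];
    let (chunks, map) = self_encrypt(&file);
    println!("{} chunks, ids: {:x?}", chunks.len(), map.chunk_ids);
}
```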

GET

The action of reading a piece of data which a user has access to is a simpler process, as there is no need for MaidManager consensus. Clients can send messages directly to ManagedNodes so long as they know the ID of the data, which they hold locally (for example in a data map). In fact, no group consensus is strictly needed to retrieve a piece of data; however, in order to reward the data holder under proof of resource, the DataManagers confirm the successful delivery of the data.

[Figure: circle-groups-GET – a Client retrieving data from the closest ManagedNode, with the DataManager group confirming the response]

The Client sends a message to the ID of the data it is looking for, which is picked up by the closest ManagedNode in the DataManager group, and that node responds with the data itself. If there is a problem obtaining the data from this Vault, a short timeout triggers the second closest ManagedNode to respond with the data instead, and so on. The DataManager group confirms the response by the Vault and sends a reward message, with a newly created structured data item representing a unique safecoin, to the public address of that particular node. A future post will go more in depth into safecoin creation, handling and cost metrics.
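The fallback behaviour described above can be sketched as a simple retry loop, shown below. Network calls are faked with a closure, the timeout handling is reduced to the fetch function returning nothing for an unresponsive Vault, and the toy 8-bit IDs are assumptions made for readability.

```rust
// Illustrative sketch of the GET fallback: try the closest storing Vault
// first, then the next closest, until one responds.

use std::time::Duration;

type NodeId = u8; // toy IDs standing in for full-length XOR addresses

/// Try each ManagedNode in order of XOR closeness to the data ID until one
/// returns the chunk within the timeout.
fn get_chunk<F>(data_id: NodeId, mut group: Vec<NodeId>, fetch: F, _timeout: Duration) -> Option<Vec<u8>>
where
    F: Fn(NodeId) -> Option<Vec<u8>>, // None models an unresponsive Vault
{
    group.sort_by_key(|v| *v ^ data_id); // closest ManagedNode first
    for vault in group {
        if let Some(chunk) = fetch(vault) {
            // In the live network the DataManager group would now confirm
            // this response and issue a safecoin reward to the responder.
            return Some(chunk);
        }
        // timeout elapsed: fall through to the next closest Vault
    }
    None
}

fn main() {
    let data_id = 0b0110;
    let group = vec![0b0111, 0b0100, 0b0010, 0b1110];
    // Pretend the closest Vault (0b0111) is unresponsive.
    let fetch = |v: NodeId| if v == 0b0111 { None } else { Some(vec![v]) };
    let chunk = get_chunk(data_id, group, fetch, Duration::from_secs(2));
    println!("got chunk from fallback Vault: {:?}", chunk);
}
```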

Relocate Stored Data

When the network churns and ManagedNodes holding data go offline, the network must assign data storage responsibilities to other nodes in order to preserve data retrievability. This is a network-critical action and one of the few instances of an action being initiated by a persona rather than a Client. A missing ManagedNode is detected quickly, as periodic heartbeat messages are sent between all connected nodes.

[Figure: circle-groups-churn – relocating a data chunk after a ManagedNode storing it goes offline]

When a ManagedNode storing a particular piece of data becomes unresponsive to a connected node, the Vaults closest to it are alerted. Once the loss is confirmed by the DataManager group maintaining that data chunk, the group chooses the next closest Vault to the data ID to become a new ManagedNode and member of the DataManager group. The newly chosen ManagedNode sends a success response to the rest of the close group, which then confirms it.
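In code, the heart of this step is simply recomputing the close group without the lost node, as in the hedged sketch below. The 8-bit IDs and group size of 4 are assumptions carried over from the earlier examples; confirmation messages and the actual copying of the chunk are only hinted at in comments.

```rust
// Illustrative sketch: after a storing Vault goes offline, the next closest
// online Vault to the data ID is promoted into the DataManager group and
// becomes a ManagedNode for that chunk.

const GROUP_SIZE: usize = 4;

type NodeId = u8; // toy IDs standing in for full-length XOR addresses

/// Recompute the close group for `data_id` once `offline` has been confirmed
/// lost: survivors stay, and the next closest online Vault joins the group.
fn handle_churn(data_id: NodeId, mut online_vaults: Vec<NodeId>, offline: NodeId) -> Vec<NodeId> {
    online_vaults.retain(|&v| v != offline);
    online_vaults.sort_by_key(|&v| v ^ data_id);
    online_vaults.truncate(GROUP_SIZE);
    // The surviving members would now replicate the chunk to the newcomer,
    // which reports success back to the rest of the close group.
    online_vaults
}

fn main() {
    let data_id = 0b0110;
    let vaults = vec![0b0111, 0b0100, 0b0010, 0b1110, 0b1000, 0b0001];
    let new_group = handle_churn(data_id, vaults, 0b0100);
    println!("new DataManager group: {:?}", new_group);
}
```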

Close Group Strategy

Decentralized data reliability is an important feature of systems which remove dependence on central parties. Furthermore, systems which aim to preserve privacy for users must also consider the methods used for consensus and understand the trade-offs. Bitcoin’s global consensus around a single public ledger helps guarantee network and transaction status but lacks scalability and privacy. By segregating Vault responsibilities in SAFE based on XOR closeness, the network is able to maintain reliable data and network status without revealing any information globally. The potential for attacks on consensus mechanisms also varies with their implementations. While no system can claim absolute security, sufficient measures can be put in place to reduce that potential by increasing the difficulty and resources necessary for staging such an attack. In SAFE, for example, the series of close group consensuses in a PUT forces attackers to control a quorum in multiple close groups. Additionally, network churn in such a large address space, together with the constant distribution of new IDs, makes Sybil attacks more difficult to mount, as nodes are constantly showing up in random parts of the network. Node ranking can also be used within close groups to detect disagreements in consensus and to downgrade or push out uncooperative nodes.

With the earlier introduction of XOR properties in DHTs and this overview of their use for close group consensus within SAFE, we hope to have provided a better general understanding of data reliability in the network. This authority process is used for every action on the network, including managing safecoin ownership and transfer. Expect future posts diving into the details of attacks on close group consensus (and their mitigations), data types in SAFE, and safecoin functionality, including the built-in reward system for data holders, applications and public content. In the meantime, questions or discussion about the consensus approach are welcome in our community forum.

 

1. This proxy also serves as the initial bootstrap node for introducing nodes back into the network, whether Client or Vault. All nodes start out as a Client and negotiate connections to their future peers via their proxy node. The bootstrap node has no relationship to the Client or Vault (in terms of closeness) and is randomly chosen from a list of hardcoded nodes in a bootstrap config file, from a cache of previously used bootstrap nodes, or through a service discovery mechanism such as a Vault already connected to SAFE on the same local area network (LAN).

Further resources:

The Language of the Network (2014)

Autonomous Network whitepaper (2010)

‘Peer to Peer’ Public Key Infrastructure whitepaper (2010)

MaidSafe Distributed Hash Table whitepaper (2010)

MaidSafe Distributed File System whitepaper (2010)

Opting into transparency

One of the biggest misconceptions about the Safe network amongst those interested in bitcoin is the tendency to assume it’s based on blockchain technology. In fact, the two technologies were developed simultaneously. By the time Satoshi released the bitcoin whitepaper in 2008, David had already been conceptualizing and iterating on his own ideas for MaidSafe for over two years. However, this isn’t to say that the two technologies aren’t related, especially in their shared understanding of the importance of decentralized consensus. Both aim to build sustainable and resilient systems and might as well be solving the same problem, but it is the method that makes the two protocols very distinct and their solutions complementary.

With all the amazing innovation that blockchain technology brings, it is important to keep in mind the underlying implications of a global, public ledger. At face value, it becomes fairly easy to see the inaccuracy of calling the system, and the actors within it, anonymous. A global public ledger is simply not the basis for an anonymous protocol, even though steps can be taken to increase anonymity. In the end, however, these efforts are working around the fact that every transaction is publicly recorded. This same public record is how bitcoin and other blockchain technologies reach consensus. There is one and only one bitcoin blockchain that all the miners agree on. Transparency is the result of Satoshi’s consensus scheme.

Let’s now dip into the Safe network, which has a similar purpose of disintermediation but achieves it with privacy and anonymity built in from the core. Without getting too deep into the technical details of the Safe network, and as with previous analogies, I invite you to keep in the back of your mind the image of each vault in the network as a single ant amongst a colony. The Safe network runs on ANT* (Autonomous Network Technology), a system in which, in addition to a vault’s purpose of storing and computing chunks of data, it also plays a specific managing role in relation to another nearby** vault. Groups of vaults play the same managing role to provide redundancy in the event that another vault goes offline. This group action is what allows for basic consensus without the entire network knowing an event happened. Of course, this division of management is just the beginning of keeping such a complex system autonomous; however, it is this mechanism which gives the Safe network its anonymous consensus scheme.

Putting the two technologies side by side again, we can see how they complement each other as solutions to decentralized consensus. Users of blockchain networks opt into privacy layers for increased anonymity, while users on the Safe network will opt into transparency layers for increased accountability. Individuals and small groups may prefer to stay private or anonymous, but when interacting with larger businesses or organizations, accountability becomes increasingly beneficial. Defaulting to anonymity for individuals through the Safe network will increase freedom, while the greater requirement for transparency from organizations will be realised via blockchain public ledgers, minimizing corruption. There is certainly a balance between privacy and transparency, and with these two technologies available we may finally be able to achieve real digital freedom as individuals while also giving larger entities efficient tools for providing accountability.

*For a more technical overview of the Safe autonomous network, check out the SystemDocs.
**In the Safe network, we use XOR distance which is an important distinction for privacy and very different from the typical Euclidean definition of distance.

Using Complexity in Nature to Understand the Safe Network

Complexity is vital to the organizational patterns shown in decentralized systems. As a recurring theme when discussing the Safe network, the relationship to natural phenomena can help less technical individuals better understand how the system works. From the most basic of structures to the systems which encompass planets and galaxies, there are nearly infinite examples of nature organizing itself. The Safe network is a global system built with these phenomena in mind in order to achieve self-regulation, self-rehabilitation and self-organization without a central party.

David Irvine commonly references ant colonies, which show extremely complex behavior rooted in the cooperation and communication between individual ants – similar patterns also develop in other social insects. It is worth noting that the very basic sets of rules which individual ants follow help with focus and conserving resources, while the division of labor they exhibit allows those basic rule sets to be diverse across the colony working as a whole. While both soldier and forager ants play important roles in the colony, there is no need for soldiers to know the tasks of foragers (and vice versa); this division of knowledge minimizes the amount of brain processing required, which in turn reduces energy usage.

Similarly, in the Safe network, vaults created by individuals are assigned by the network to specific personas, each of which enables a vault to follow only a specific set of rules. These personas and their correlating rule sets exist to manage parts of other vaults in the network. Though there are several aspects of each vault to manage, the managing personas will always be distributed across many vaults and never concentrated in one. As with ants, basic rules within a division of knowledge and labor conserve resources for the individual vaults while enabling a wider variety of functionality for the network as a whole. Additionally, ranking mechanisms based on vault health and resourcefulness to the network determine which vaults are assigned which managing personas.

At a similar scale to ants, the swarming patterns of birds and fish depict another great emergent behavior that arises only through interactions between many components. Natural, emergent, decentralized systems are evidence that cooperation is evolutionarily beneficial for achieving a higher level of organization and general progression. Systems of interacting organisms create incentives to cooperate and to fend off both external offenders and internal threats.

Decentralized order can also be found in the interactions of non-living objects. An example at a much smaller scale is in snowflakes and coral reefs, which create fractal patterns from the shapes and movement patterns of independent water crystals and ocean minerals respectively. At a much larger scale, weather patterns form due to relative air pressure between points on the globe, and because of these patterns we are able to predict future behavior. The complexity shown in weather is a great illustration of the balancing effect between order and chaos at a global level, and a great comparison for how the Safe network manages storing chunks of distributed, encrypted data. Like all forms of complexity, the Safe network will be constantly morphing and optimizing in its never-ending quest for equilibrium.