Consensus Without a Blockchain

A key requirement of distributed computer networks is a consensus system. Consensus systems enable a specific state or set of values to be agreed upon without the need to trust or rely upon a centralised authority, even if nodes on the network fail or are compromised. This ability to overcome Byzantine faults, classically illustrated by the Byzantine Generals Problem, makes effective consensus networks highly fault tolerant.

Approaches to reaching consensus within distributed systems are likely to become increasingly important as more and more decentralised networks and applications emerge. IBM’s paper, Device Democracy, confirms that Big Blue envisions a decentralised computing platform powering the Internet of Things, a view that further validates what the Bitcoin and decentralised computing fraternity have known for some time: that decentralised networks offer an efficiency and robustness simply not possible in centrally controlled systems.

Why Not Use a Blockchain?

Almost all consensus-based systems within the cryptocurrency community, Bitcoin included, use a blockchain: an immutable, append-only, public ledger that maintains a database of every transaction that has ever taken place. While this ledger is advantageous for many different types of operation, it also comes with its own set of challenges.

One of the most commonly cited issues is scalability. Specifically, in order for the network to reach consensus, this increasingly large (and centralising) file must be distributed between all nodes. In the early days of the network this wasn’t a major issue; however, as Bitcoin continues to grow in popularity, its blockchain is now a 27GB file that must be synced between the network’s 6,000-plus nodes.

So, if we move the concept of consensus into the context of a decentralised data and communications network, we can start to evaluate how effective the existing Bitcoin consensus mechanism would be.

Within a data network, when an end user makes a request via the client, they expect to be able to set up their credentials, store or retrieve their data instantaneously, and know that the operation has been carried out correctly. In essence, they require network consensus to be reached at network speed (in fractions of a second), a tricky problem to solve in a large decentralised network. Within the Bitcoin network, the first round of consensus occurs after 10 minutes, with each further block consolidating the transaction. This transaction speed is clearly unacceptable in a data network environment, and any attempt to increase block speed to circumvent the issue would have significant negative consequences for security.

Close Groups

What we need is a decentralised consensus mechanism that is both fast and secure. But how do you reach consensus rapidly across an increasingly large network of decentralised nodes without compromising security?

The answer lies in close groups. Close group consensus uses a small group, a tiny fraction of the network, to state a position for the whole network, removing the need to communicate with all nodes. Within the SAFE Network the close group size is 32, with a quorum of 28 required for a specific action to be taken. An example may help explain this point.
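
As a minimal sketch of the quorum rule just described (the names `GROUP_SIZE`, `QUORUM` and `action_approved` are illustrative, not taken from the SAFE codebase):

```python
# Hypothetical sketch of the close group quorum rule.
# The figures mirror those in the text: a group of 32 with a quorum of 28.

GROUP_SIZE = 32
QUORUM = 28

def action_approved(votes):
    """An action passes only if at least QUORUM members of the close group sign off."""
    approvals = sum(1 for v in votes if v)
    return approvals >= QUORUM

# 28 approvals out of 32 is just enough...
print(action_approved([True] * 28 + [False] * 4))   # True
# ...while 27 is not.
print(action_approved([True] * 27 + [False] * 5))   # False
```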

Let’s assume that Alice wants to store a new photo. As Alice stores the image, it is encrypted and broken up into chunks as part of the self-encryption process and passed to a close group of Client Managers. This close group is made up of the vault IDs closest to the user’s vault ID in terms of XOR distance, distance measured in the mathematical sense as opposed to the geographical sense. The Client Managers then pass the chunks to 32 Data Managers, chosen by the network because their IDs are closest to the ID of the data chunk, so the chunk ID also determines its location on the network.
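
The idea of XOR closeness can be sketched in a few lines. This is a toy model (256-bit IDs derived from names for brevity, rather than the network's 512-bit IDs; the function names are ours), but the selection rule is the same: the close group for a target is simply the set of IDs nearest to it under the XOR metric.

```python
import hashlib

def node_id(name: str) -> int:
    # Derive a toy 256-bit ID from a name; the XOR-closeness idea is
    # identical for the larger IDs used on the real network.
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # "Distance" in XOR space: the bitwise XOR of two IDs, read as an integer.
    return a ^ b

def close_group(target: int, all_ids, size: int = 4):
    # The close group is the `size` IDs nearest the target in XOR space.
    return sorted(all_ids, key=lambda n: xor_distance(n, target))[:size]

nodes = [node_id(f"vault-{i}") for i in range(100)]
chunk = node_id("photo-chunk-1")
group = close_group(chunk, nodes, size=4)

# Every member of the group is XOR-closer to the chunk than any non-member.
assert all(xor_distance(g, chunk) <= xor_distance(m, chunk)
           for g in group for m in nodes if m not in group)
```

Note that because the chunk's ID is itself an address in the same space, "where the data lives" falls directly out of this metric, which is what the text means by the chunk ID determining its location.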

Once consensus is reached, the Data Managers pass the chunks to 32 Data Holder Managers, who in turn pass the chunks to Data Holders for storage. If a Data Holder Manager reports that a Data Holder has gone offline, the Data Manager decides, based on the rankings assigned to vaults, which other vault should hold the chunk. In this way the chunks of the original file are constantly monitored and maintained, ensuring the original data can always be accessed and decrypted by the original user.
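
The repair step can be sketched as follows. This is purely illustrative (the function, the rank table and the vault names are ours, not SAFE's): when a holder drops offline, the chunk is re-homed to the best-ranked vault not already holding it.

```python
# Hypothetical sketch of chunk repair when a Data Holder goes offline.

def replace_offline_holder(holders, offline, candidates, rank):
    """Drop the offline holder and copy the chunk to the best-ranked spare vault."""
    holders = [h for h in holders if h != offline]
    # Candidate vaults: online, and not already holding this chunk.
    spares = [c for c in candidates if c != offline and c not in holders]
    replacement = max(spares, key=rank)   # highest-ranked vault wins
    holders.append(replacement)
    return holders

rank = {"v1": 5, "v2": 9, "v3": 7}.get
print(replace_offline_holder(["v1", "v2"], "v2", ["v1", "v2", "v3"], rank))
# ['v1', 'v3']
```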

A client trying to store a piece of data for which consensus cannot be reached, for example because they don’t have enough safecoin, would be refused.

Cryptographic Signatures

Close group consensus is not used for every operation, however. Employing this mechanism for every operation would be inefficient and put an unnecessary load on the network. Close group consensus is only used for putting data on the SAFE Network, such as a file, a message, or, in time, computation. For making amendments to existing data, such as changing the contents of a file (creating a version) or sending a safecoin to another user, cryptographic signatures are used to authorise the action.

Cryptographic signatures were conceptualised decades ago, and products using them were first widely marketed in the late 1980s. The SAFE Network uses RSA with 4096-bit keys, a very well established and tested algorithm. Cryptographic signatures mathematically validate the owner of any piece of data and can prove ownership beyond any doubt, provided the user has kept their private key safe. If the end user owns a piece of data and can demonstrate this fact, by digitally signing their request with their private key, the network permits them to change the data.
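
To make the sign/verify mechanics concrete, here is textbook RSA with deliberately tiny primes. This is a toy for illustration only: real deployments, such as the 4096-bit RSA the text mentions, use far larger keys and proper padding schemes, and you would never hand-roll this in practice.

```python
import hashlib

# Toy textbook-RSA key with tiny primes, purely to show the mechanics.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

def h(message: bytes) -> int:
    # Reduce a hash of the message into the signing range [0, n).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Only the holder of the private exponent d can produce this value.
    return pow(h(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone can check the signature with the public pair (e, n).
    return pow(signature, e, n) == h(message)

sig = sign(b"store chunk 42")
print(verify(b"store chunk 42", sig))   # True
# A tampered message would, with overwhelming probability, fail verification.
```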

Avoiding the double spend

The lack of a blockchain raises one additional question, though. Without a distributed immutable ledger, how do you eliminate the problem of the double spend? This is something managed very effectively by the Bitcoin network, which verifies each and every transaction and records it on the blockchain.

Safecoin, the currency of the SAFE Network, is generated in response to network use. As data is stored, or as apps are created and used, the network generates safecoins, each with its own unique ID. As these coins are divisible, each new denomination is allocated a completely unique ID, highlighting the importance of having a 2^512 address space.

As coins are allocated to users by the network, only that specific user can transfer ownership of a coin, by cryptographic signature. For illustrative purposes, when Alice pays a coin to Bob via the client, she submits a payment request. The Transaction Managers check that Alice is the current owner by retrieving her public key and confirming that the request has been signed by the corresponding private key. The Transaction Managers will only accept a signed message from the existing owner. This proves beyond doubt that Alice is the owner of the coin; ownership of that specific coin is then transferred to Bob, and from that point only Bob is able to transfer it to another user. This sending of data (coins) between users differs from the Bitcoin protocol, where bitcoins aren’t actually sent to another user; rather, Bitcoin clients send signed transaction data to the blockchain.
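
The ownership-transfer rule described above can be sketched like this. Everything here (the `Coin` type, the `transfer` function, the key names) is a hypothetical illustration of the check, not the SAFE API; signature verification is abbreviated to a flag plus a comparison against the recorded owner key.

```python
from dataclasses import dataclass

@dataclass
class Coin:
    coin_id: str
    owner_pubkey: str   # only the matching private key can authorise a transfer

def transfer(coin: Coin, signer_pubkey: str, signature_valid: bool, new_owner: str):
    # The Transaction Managers accept a signed request only from the current owner.
    if signer_pubkey != coin.owner_pubkey or not signature_valid:
        raise PermissionError("not signed by the coin's current owner")
    coin.owner_pubkey = new_owner   # from here on, only the new owner can transfer it
    return coin

coin = Coin("c-42", "alice-pub")
transfer(coin, "alice-pub", True, "bob-pub")
print(coin.owner_pubkey)   # bob-pub
```

Note the contrast with Bitcoin: here the coin record itself changes owner, rather than a transaction being appended to a global ledger.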

This is not an attack on blockchains

The lack of a blockchain means that it is not possible to scrutinise all the transactions that have ever taken place, or to follow the journey of a specific coin. In this respect safecoin should be thought of as digital cash, where only the existing and previous owners are known. Users who value their anonymity will see this as a distinct advantage, and it should not be ignored that this enables an unlimited number of transactions to occur at network speed.

However, this blockchain-less (yes, a new word) consensus mechanism does not provide the scrutiny and transparency desirable in some financial systems, or in general record keeping. This raises an important point. It is not the intention here to suggest that one consensus mechanism is better than the other, or that there is one consensus mechanism to rule them all. Instead, we should consider that each is better suited to a different purpose. This is why the incredibly bright future for distributed consensus will come in many forms, with each iteration better than the last.

To find out more about the SAFE Network and how it works, please visit our System Docs.


    1. As I understand it, you can’t audit in the normal way because there is no ledger, no central record.

      What you do have, though, is maths: the precise details of how safecoins are allocated, and ways to sample their distribution within the possible address space. So, I believe, it may be possible to estimate the number issued at any point in time, for example by sampling the safecoin address space to determine what proportion of it has been allocated to a safecoin.

      I’m speculating here though, so don’t take this as definitive, but based on my understanding of discussions in the forum.

  1. It will be possible to estimate the number of safecoins the network has allocated, as the difficulty of getting farming requests accepted will provide a good indication. The difficulty will change dynamically and autonomously: too much resource being supplied will increase the difficulty (to avoid over-supply), while a lack of resource will reduce it (to avoid under-supply).

    It will be very difficult, and therefore unlikely, for anyone to be able to index allocated coins.

    Thanks for the question, I hope this info helps.

  2. Thanks for the explanation, I’ve been looking for a clear description of Maidsafe consensus for a while and this answers it nicely.

    So, two questions that were not answered here:

    1. What incentivizes nodes to participate honestly?
    2. What is the backup/failsafe procedure in the event of a successful collusion on a safecoin? Particularly, if 28 of 32 nodes collude in order to sign off on an infinite number of double spends, then what stops the hyperinflation from getting out of hand before it can somehow be detected and stopped?

    The particular nasty attack vector here is that Evil Malicious Mallory writes a patch which, during every consensus round, (i) broadcasts a message to the other co-participants in the consensus group stating “hey, I am evil. What about you?”, and (ii) if it finds 27 other “evil” participants, colludes to hyperinflate the safecoin or do whatever nasty consensus-breaking mechanism. Mallory then pays $5 for every farmer to download this patch. What is the individual incentive not to download the patch? Alternatively, we could have the evil behavior be refusing to sign off on transactions, in which case a mere 13% non-altruists can lead to a halt of the network.

    Or are you okay assuming a large proportion of altruism and bribe aversion as a security assumption?

    (Note that “bribes” in cryptoeconomics do not have to correspond to literal bribes in the real world; they could be an instance of the government applying regulatory pressure to coerce large farms to act in certain ways, or applying pressure to force software developers to add bugs, etc. Given Maidsafe’s emphasis on privacy and freedom, such possibilities are probably something you should care about).

    1. Cheers for the comment.
      “What incentivizes nodes to participate honestly?”

      Nodes are programmed to carry out very simple, predefined and deterministically measurable tasks, not complex languages or similar; in this way many of the attack vectors are confined to what can happen in that realm. So, for instance, a node need not act with any ‘feeling’, altruistic or not; it only needs to behave as expected or be noted, de-ranked and removed. The key is a very limited, specific and measurable set of rules that must be followed. There is no notion of honesty in these nodes, only logic. This is like the ant analogy I use a lot: complex systems of decentralised control that follow very simple rules can create extremely sophisticated communities, but the rules must be simple and measurable. These can then evolve over time, but truly decentralised will mean a very minimal set of rules and extremely core algorithms and data types with a genus that is clear and concise.

      In the collusion attack you mention the nodes would

      1: Have to understand Mallory’s request (we will not be implementing code to answer Mallory 🙂 )
      2: Not report her to her close group

      “Or are you okay assuming a large proportion of altruism and bribe aversion as a security assumption?”

      No! But neither would any system. If Mallory sent emails to all Bitcoin miners then yes, it is a problem for Bitcoin. Here she won’t know the participants. In fact most users will not even know the address of their vault, or care about it. They will care that their wallet address is logged locally with the vault to make payments to.

      Hope that helps a bit Vitalik. It’s a pretty huge subject, and although the rules are simple, the number of personas is large, and each has its own very confined, measurable role, cryptographically secured via a decentralised PKI-type system to identify it properly. So it’s a decent amount to get through.

      1. No worries. Thanks for the response. It’s not straightforward and none of us will know the outcomes; as Steve Jobs said, you can join the dots looking backwards, not forwards. I try not to spray DDT on mossies :-). In any case, the nodes may be owned by somebody but they are not controlled by them. Their actions are directly managed by the network, so bad behaviour is spotted by the group who manage the node, and outbound deterministic requests are accumulated and signature checked. So changing the vault code will simply mean you have a defunct vault, as it is de-ranked.

        I know your philosophy regarding incentives and hear it a lot from others, but I disagree with it on a couple of levels (not all). I think this is fine, as I am not a game theory follower and certainly not bought into Nash equilibrium theories as you seem to be (or closer to), and that’s also fine. I believe you can have a complex system with inherent incentives, as in nature: why does the ant pick up the huge leaf, when she could let others do it and just munch away? The discussion could take years, and I’d still say it’s a ‘vim is better than emacs’ kind of thing. Anyway, the ant does it because it knows that if everyone does their job they thrive, not through altruism but inbuilt logic.

        In decentralised systems there are a huge number of indirect incentives, such as caching data or performing a transaction at apparently no cost, and there are also a huge number of ways to ensure correctness. If a node says 1 + 1 = 3 (or sends the wrong message to the wrong node), then others can tell it is rubbish, and if it signs this it can be reported to the nodes close to it with proof of bad behaviour. They will de-rank it, as they want to survive, and the node loses out. To survive, the node will want to avoid such mistakes and act responsibly, or be kicked out. There are a myriad of these rules and checks happening in real time. So incentives need not always be direct, and certainly not always directly measurable. You may ask why cache when you get nothing; look much deeper and you actually get a lot from this apparent act of selflessness (it is actually selfish to do what is asked of you in the network; we spend a huge amount of time making sure of that), but it’s not immediately measured and the user is not directly incentivised to do so.

        Then why report bad behaviour, when there is no payment? Again, look much deeper: the payment is survival of the species (or version of the code 🙂 )

        There are enormous threads going very deep into much of this on the forum, and some papers on the site, so you can dive very deep into the network. I recommend Eric’s lectures on the network, as there are things not immediately obvious about XOR space and networks in particular. It keeps our university people happy writing papers on it all, and on the security we have managed to achieve through it. So please feel free to dive in; it’s hugely interesting, but very different for sure.
