Personal Opinion

Theories on Incentives, Scaling & An Evolved Economy Pt 2

In the previous post of this series, I described how human monetary systems have typically scaled and how that history compares to the history of the Internet. Both systems operate on models that incentivise centralisation at scale, which introduces vulnerabilities as dependence on the central points grows. Next, I want to extend this exploration to the exchange and ownership of property on the Internet and to scaling intellectual property systems. Reviewing the contrasts and similarities between physical property and intellectual property can help shed light on the challenges we face in managing those systems as they grow, online and off. (Disclaimer: Some references in this post assume you have read the first part.)

Properties of Property: Personal, Private and Public

Having explored historical implementations of economies and their breakdown points at scale, we will find similarities in systems dealing with ownership of property. While cultures around the world have different perspectives on property, there are three basic categories of ownership: personal, private and public.

Personal property consists of belongings actively used by an individual, such as a toothbrush, a homemade apple pie or a wrist watch. Public or common property consists of objects or land openly used or consumed by any member of a community, such as a common walking path or a common garden maintained by some or all of its members. Finally, private property is ownership usually in the form of real estate, but it can be seen more broadly as relatively unused property owned by a person or group, as well as any property requiring significant external security (usually provided by state-based registration and titles). Examples of private property are banks, rental apartments and summer sailing boats. The lines between these categories can blur at times, but for simplicity I will not address those cases.

Abundance within Meatspace Economies

The distinction between private and public ownership models, and their respective abilities to scale, is an important aspect of economies to explore. For example, in a community where food and the labor to sustain its growth are abundant (perhaps out of love for gardening), the food itself is not very useful as a medium of exchange, since it is produced without distinct market demand. If one of these gardeners offers four apples to a blacksmith in exchange for a specific gardening tool, by normal market standards the trade doesn't make sense: the produce is already abundant and will rot if not consumed. However, it would make sense for the blacksmith to view the gardener's labor as valuable to the community and to support them by simply giving them the tool and continuing to eat the produce at their leisure. The two versions of this exchange may sound equivalent, but the incentives and transactions have quite different characteristics as the system grows.

Scaling Public Gardens

The preferred type of trade in the above scenario of abundance, also referred to as a gift economy, can be related to the tracking of Rai stone ownership (which we explored in the previous post): it can sustain itself while the scale stays small. Beyond that, a garden maintainer might realise that the relationships with those consuming the produce have become less beneficial, and can no longer depend on established community connections to bring gifts in return. At a larger scale, it makes sense for these producers to work privately: the abundance is reduced when growing for more individuals, while putting a price on the produce helps guarantee their labor a similar or greater rate of return and makes it less likely to be taken advantage of. This price can come in the form of credits (tracking the number of fruits and vegetables individuals receive) or as a price relative to a more fluid medium of exchange such as a currency. However, even though a privately managed garden scales better than a commonly managed one, it is not free of the further vulnerabilities brought on by economies of scale. Reducing the cost of inputs for a growing output is a natural tendency of business, and beyond a certain threshold it leads to a rather self-destructive cycle of incentives towards more centralised control.

Properties of Intangible Property

Now that we’ve established a perspective on basic concepts in meatspace property, we can map them fairly accurately onto intellectual property for a better understanding of the technology industry’s challenge to scale the labor behind open source development and public content. Intellectual property (IP) can also be categorised as personal, private or public. Personal IP is any idea or data we keep to ourselves so that only we (and perhaps a very few others) have access, such as health records and personal thoughts. Public IP, in contrast, is ideas and information shared freely and openly for anyone to access and use, like weather reports, certain online academic journals and content within the public domain. Finally, intellectual property considered private is data controlled for restricted use, such as restricted software code or online publishing which requires payment to view. Private intellectual property can be further sub-categorised into works protected by preventing physical reproduction, such as DRM (Digital Rights Management), and works protected by legal security, such as copyright and even free software licenses. While licenses like the GPL and MIT promote open standards, the fact that there is any restriction on use introduces aspects of private, owner-controlled IP. This is not to say it is a wrong method (I heavily stand by MaidSafe’s decision to release code as GPLv3), but in the context of these definitions, and for the next part in this series, I think it’s important to keep this in mind.

Scaling Public Idea Gardens

So, with these distinctions, there is obviously a vast amount of private IP out there of all shapes and sizes as our society reacts to the globalisation of ideas. Patent and copyright systems have been in use for several hundred years, mostly aiming to compensate the labor that goes into creations with an inherent lack of scarcity, like blueprints for inventions and writings, and generally to give an economic advantage to the creator. Unfortunately, this sort of solution essentially puts a price on access to a piece of intellectual property and privatises ownership, similar to the “once localised and looking to scale” community gardens, but even more drastically. The physical limits to overcome for intellectual property to be abundant are comparatively minuscule, and shrinking further thanks to the improving storage capabilities of computers. So just as some members of a community may be inclined to tend a community garden, others, such as inventors and designers, might enjoy creating intellectual property for its own sake; but as soon as their labor is easily taken for granted, the economic balance is disrupted. We should be able to support production of intellectual property at a small scale, but the globalisation of communications and knowledge over the past several centuries has made that impractical.

Code and Content as Assets

Many activists involved in resisting DRM, copyright and patents evangelise that private IP conflicts with long-term progress and inhibits essential freedoms. Finding economic answers that better serve the production of intellectual property, so that more people are incentivised to share ideas publicly rather than keep them privately protected, is certainly a difficult task and will most likely not be resolved by a single solution. Most mainstream research journals see the IP they publish as an asset to protect in order to sustain their business, and thus make access artificially scarce by marking it with a price. Many VCs who fund software development similarly see the code produced by developers as a significant asset which protects against competing implementations. In some cases there can be agreements between corporate competitors to build public solutions for standardisation purposes, but this is not a reliable approach and can lead to poor implementations of standards that many others must then rely upon. To integrate intellectual property into our economy properly, we must work to evolve the economic system itself rather than force ideas to take on characteristics which make them artificially scarce.

In the final post of this series, I will survey solutions which have pushed the boundaries of how we deal with money and intellectual property and, more specifically, what SAFE brings into the mix. Scaling is the main factor in the breakdowns we see in many systems, from currency and property to the Internet itself, so focusing our sights on this problem is essential. While MaidSafe is working on a single solution to address these issues via an evolved Internet, previous experimentation and future supplementary projects from other parties will be necessary to grow a real foundation for a global economy and digital society.

Glocalization of Internet Freedom

For the first week of March, several hundred internet freedom activists from all around the world gathered for the Internet Freedom Festival at the Las Naves collaborative space in Valencia, Spain, for a wide variety of sessions addressing tools, policies and perspectives on privacy and security on the Internet. Trainers, developers, journalists, technologists and those simply curious to learn, from 76 countries, traded perspectives and skills while forming bonds to continue collaboration post-festival and strengthen support for each other’s work. Previously named the “Circumvention Tech Festival”, the event’s organizers placed a strong emphasis on creating a safe space for open collaboration without compromising the privacy and identity of attendees, who risk oppressive local governments learning of individuals’ attendance. A strict no-photography rule was in place, in addition to the Chatham House Rule (not referring to identities when quoting or referencing points individuals made) for note-taking and general future discussion of the topics presented. Attention was also put on meeting other attendees by prioritizing sessions with discussion and collaborative activities. Session topics ranged from threat modeling through holistic risk analysis to community networks and the process of flashing routers to build a mesh. The entire festival offered a beating pulse of local perspectives on digital privacy and security while simultaneously highlighting the need for global collaboration in building tools, advocating policy and strengthening communications within this community and beyond.

The concept of “glocalization” which permeated the event was perfectly introduced to me in the first session I attended at the festival, Glocalization for Noobs: How to Design Tools for a Global Audience, where panelists discussed and advocated integrating translation more tightly into software development. They discussed the translation of software going beyond localizing text to taking into consideration the entire user experience from the perspectives of various regions. While many products are marketed towards specific areas, most software is used globally, or at least has the potential for wider adoption, and would benefit from review by testers in various locales. This focus on region-specific points of view continued throughout the event, where a handful of meetups dedicated time to discussing the state of Internet security and surveillance in Latin America, Africa and the Middle East. Sessions also incorporated this focus by recognizing and addressing the particular hurdles of individual regions. The session Network disconnections: Effects on Civil and Trade Rights included a short presentation on the regular disruptions in internet access that people in Pakistan face, and subsequent research, followed by a general discussion of region-wide disruptions, usually due to political pressure, and the policy and economic arguments that can be made in opposition. Other sessions focused more generally on considering global communities and allowing their respective perspectives to be shared together. Privacy Across Cultures was dedicated to a discussion of the impact that privacy, and its absence, has had in various cultures, going beyond freedom of expression and focusing on longer-term effects.

Beyond the diverse cultural representation at the event, there was also a wide array of representatives from tools new and old. In one workshop session, titled Deploy an emergency Libre-Mesh network with local services, we formed small groups and flashed routers with libre-mesh to form a p2p network. It was one of the fastest and simplest router-flashing efforts to build a mesh network that I’ve ever experienced: it took about 30 minutes total for all 7 groups (with a range of familiarity with flashing routers) to connect with each other. If mesh networks are of interest to you or your community, I highly recommend checking out libre-mesh. Additionally, one of the evenings featured a tool showcase of 15 technologies, ranging from a service called Stingwatch for detecting and reporting locations of Stingrays (fake cellphone towers used by authorities for tracking individuals) to the better-known Freedombox (security- and privacy-focused software for personal servers). Unfortunately, I was not aware of this portion of the event beforehand, nor of the status of the MVP launch, else I would have loved to participate and demo the SAFE network to the crowd. Alas, I was able to do so in a more intimate setting, in a session of its own. Having attended the festival intending to present a more general session on improving communications on network topologies and ownership infrastructures (based on previous explorations of the topic), I was able to join several dozen others who created “self-organized” sessions added to the schedule as the week progressed. This session was much less interactive, apart from various questions from participants, but because we now have software to show, I was able to finish the presentation with a successful demo of the SAFE Launcher and an example app to a crowd for the first time!

Overall, the Internet Freedom Festival was a huge success from a personal perspective, highlighting a variety of topics from technology to communications and diversity. To achieve true internet freedom worldwide, we must consider localized efforts and understand that needs vary from region to region, by listening rather than assuming. Digital security training has expanded throughout the world, and understanding the array of obstacles that regions face will help us build better software. I feel confident that the SAFE network will be a strong example of building a diverse, global community (as we see happening already), but I also appreciate the strong reminder that this will happen much more efficiently if we put effort into diversifying our perspective. While the MaidSafe core team is regionally diverse itself, community-based development and translation efforts will continue to be essential if we want to make SAFE a truly global network. I really look forward to attending the Internet Freedom Festival again next year with a proper SAFE network up and running, expanding my understanding even further to make the network accessible to more people (and hopefully capturing a few other team members to attend as well).

Scrap The Snoopers Charter And Connect The Dots

This month, the UK Government produced its latest piece of legislation designed to provide intelligence agencies with unfettered access to all our data and communications. The Investigatory Powers Bill (IPB), affectionately known as the Snoopers’ Charter by privacy advocates, is the latest play in the long-running debate about whether governments should not only be legally empowered to bulk-collect and surveil our data, but should actually be able to force companies to weaken the encryption they use to protect their users’ data, thereby enabling the Government to read it.

While privacy advocates will attack the legislation, and rather predictably the government will defend it, the strange thing about the IPB is that it will not and cannot deliver the government’s primary objective, which is, in the words of the Home Secretary Theresa May, to ensure that “…intelligence agencies have the powers they need to keep us safe in the face of an evolving threat”.


Mass surveillance doesn’t work

There is absolutely no evidence to support the argument that mass surveillance of data stops attacks or catches terrorists, and there is plenty of evidence to the contrary. For example, despite the individuals involved in the terrible attacks in Paris being known to security services, and the fact that France had already implemented its own mass surveillance legislation, the atrocities still took place. Similarly, those who took part in the Charlie Hebdo attack, and the men responsible for the shocking murder of Fusilier Lee Rigby in Woolwich, were all known to security services.

If we can’t monitor those flagged up to be potential terrorists, how do we expect to effectively monitor the many millions of regular Internet users?

Drinking from a fire hose

The reason that mass surveillance doesn’t work is that it is not the correct tool to prevent terrorism; in fact, some experts believe it takes valuable time and resources away from more effective tactics. The well-respected cryptographer Bruce Schneier suggests that data mining (sifting through large amounts of data looking for patterns) is effective when seeking well-defined behaviour that occurs reasonably regularly, such as credit card fraud. However, it is far less effective with very rare behaviour: the mining algorithms are either tuned to produce so much data that they overwhelm the system (some have likened this to drinking from a fire hose), or tuned to produce less data and so miss an actual attack.
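The fire-hose problem is really the base-rate problem from statistics. A quick back-of-envelope calculation makes it concrete; the numbers below are entirely hypothetical, chosen only to show the shape of the arithmetic for a very rare behaviour:

```python
# Toy base-rate calculation (illustrative, made-up numbers): even a very
# accurate detector drowns in false alarms when the target behaviour is rare.

population = 50_000_000      # people monitored (hypothetical)
actual_threats = 100         # genuinely dangerous individuals (hypothetical)
true_positive_rate = 0.99    # detector flags 99% of real threats
false_positive_rate = 0.001  # detector wrongly flags 0.1% of innocents

true_alarms = actual_threats * true_positive_rate
false_alarms = (population - actual_threats) * false_positive_rate

# False alarms outnumber real ones by roughly 500 to 1 here, so almost
# every alert an analyst investigates is a dead end.
print(f"true alarms:  {true_alarms:,.0f}")
print(f"false alarms: {false_alarms:,.0f}")
print(f"chance a given flag is real: {true_alarms / (true_alarms + false_alarms):.2%}")
```

Even with a detector far better than anything realistic, nearly every flag is an innocent person, which is exactly the "so much data that it overwhelms the system" failure mode described above.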

Bruce Schneier illustrates this point in his book, Data and Goliath:

“Think about the full-body scanners at airports. Those alert all the time when scanning people. But a TSA officer can easily check for a false alarm with a simple pat-down. This doesn’t work for a more general data-based terrorism-detection system. Each alert requires a lengthy investigation to determine whether it’s real or not. That takes time and money, and prevents intelligence officers from doing other productive work. Or, more pithily, when you’re watching everything, you’re not seeing anything.”

Safer by removing protections?

Delving into the IPB more closely, section 189, entitled “Maintenance of technical capability”, would enable the secretary of state to issue orders to companies “relating to the removal of electronic protection applied … to any communications or data”. Basically, the Government could demand that end-to-end encryption be disabled, or replaced with a weaker form of encryption by the provider, enabling user data to be read.

End-to-end encryption enables the sender and receiver to encrypt and decrypt messages without the message content being available to an untrusted third party, such as the Internet Service Provider or application provider. When we consider that one of the UK’s largest ISPs, TalkTalk, was just hacked, we can start to appreciate just how nonsensical this proposal is: end-to-end encryption would have ensured that none of the stolen information was readable. Should this law come into force, the law-abiding citizens and companies of the UK would adhere to the IPB, becoming less secure in the process, while terrorists and criminals illegally enjoy all the protections that well-implemented encryption technology offers.

This doesn’t sound like legislation that will “…keep us safe in the face of an evolving threat”. In fact, with the removal of end-to-end encryption, the government is prioritising their ability to read our information over the security of our data, which in itself is curious, for as we know, mass surveillance doesn’t make us safer either.


Connect the dots…

So, if the answer to making us safer isn’t weakening encryption and hyper surveillance, what is it?

Simply, a return to traditional investigative work, using the tried-and-tested connect-the-dots approach suggested by experts: following up reports of suspicious activity and plots, using sources and investigating other seemingly unrelated crimes, mixed with targeted surveillance. Many of the privacy advocates who have spoken out about this bill have not demanded privacy above all else. Rather, if we are going to use practices that clearly undermine our human rights, there should be a clear benefit for doing so. As it stands, the proposed legislation offers no such benefit.


1.1 Billion Reasons Companies Should Encrypt Our Data

As the media pick through the details of the latest large, embarrassing and costly data theft, the current victim, TalkTalk, a UK public telecommunications company, are set for a difficult few months. With revenue of almost £1.8 billion, the company have had an as-yet-unknown number of their 4 million UK customers’ details stolen by a perpetrator who, if recent reports are to be believed, could be anyone from a 15-year-old boy to Islamist militants.

While the post-mortem that follows will likely establish the details, the company has already admitted that some of the stolen information was not encrypted. While this was clearly lax for a company that has been targeted by hackers 3 times in the last year, it seems that under the UK’s Data Protection Act they are not legally required to encrypt data. The specific wording of the act suggests that firms need only take ‘…appropriate technical and organisational measures…’.

Senior director of security at Echoworx, Greg Aligiannis, advised: “The most concerning revelation from today’s news is the blasé approach to encrypting customer data. Security of sensitive information must be considered a priority by everyone, especially when the life histories of potentially millions of customers are at risk.”

TalkTalk are not alone: research by security specialists Kaspersky Lab suggests that 35% of companies worldwide don’t use encryption to protect data. This is surprising given the harsh penalties for breaches. IBM estimates that the average data breach costs $3.8 million, with an average cost of between $145 and $154 per record, not to mention the untold damage to the reputation of the companies affected. When we consider that an estimated 1.1 billion records were exposed during 2014, we can start to realise the extent of the problem.
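Multiplying the figures quoted above gives a rough sense of scale. This is only a back-of-envelope illustration: IBM's per-record averages come from studied breaches of modest size and don't extrapolate cleanly to mega-breaches, so treat the result as an order-of-magnitude ceiling rather than a real cost estimate:

```python
# Back-of-envelope scale of 2014's exposed records, using the figures above.
# Order-of-magnitude illustration only; per-record averages don't really
# extrapolate linearly to billion-record totals.

records_exposed_2014 = 1_100_000_000
cost_per_record_low, cost_per_record_high = 145, 154  # IBM average range, USD

low_estimate = records_exposed_2014 * cost_per_record_low
high_estimate = records_exposed_2014 * cost_per_record_high

print(f"${low_estimate / 1e9:.1f}bn - ${high_estimate / 1e9:.1f}bn")  # roughly $160-169bn
```

Even discounted heavily, a number in this region dwarfs what encryption programmes cost, which frames the "why don't more companies encrypt?" question that follows.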

With such significant repercussions for being hacked, one must question why encryption technology is not used more widely.

In almost all cases, cost will be a factor. Encryption is not cheap. Procedures need to be implemented and maintained by specialist skilled staff and then rolled out through often very large organisations. Asset management, access controls, security incident management, compliance and so on will all drive the cost, as will new hardware, such as encryption servers. Complexity is another issue that raises many questions: How will the encryption keys be managed? Do we let our employees bring and use their own devices in the workplace? Is the chosen encryption solution compatible with other systems? And what about mobile device management?

With the number of breaches rising every year and no legal obligation for companies to encrypt our data, it would seem that we as individuals need better solutions. For storing data with cloud providers, for example, client-side encryption has existed for some time, enabling users to encrypt their data before it leaves their computer, meaning that companies like Dropbox or Google can’t read your data, although they can delete it. Similarly, the self-encryption component within the SAFE Network also encrypts all network data prior to it leaving the user’s machine, and does so automatically as they upload a file. Providing encryption that is easy to use and user friendly is surely the key to its wider adoption.
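The core idea behind self-encryption is that a file is split into chunks and each chunk is encrypted with key material derived from the hashes of its neighbouring chunks, so the keys come from the data itself and nothing useful leaves the machine unencrypted. The sketch below is a heavily simplified, stdlib-only illustration of that chunking idea, with a toy XOR keystream standing in for real ciphers; it is not MaidSafe's actual self_encryption implementation:

```python
# Simplified sketch of self-encryption-style chunking: each chunk's key is
# derived from the hashes of neighbouring chunks (the hash list acts as a
# "data map" needed to decrypt). Toy XOR cipher stands in for real AES;
# this is NOT MaidSafe's actual implementation.
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real chunks are much larger

def chunk(data: bytes) -> list[bytes]:
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

def xor_stream(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def chunk_key(hashes: list[bytes], i: int) -> bytes:
    # Key for chunk i comes from the hashes of its neighbours (wrapping).
    neighbours = hashes[i - 1] + hashes[(i + 1) % len(hashes)]
    return hashlib.sha256(neighbours).digest()

def self_encrypt(data: bytes) -> tuple[list[bytes], list[bytes]]:
    chunks = chunk(data)
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    encrypted = [xor_stream(c, chunk_key(hashes, i)) for i, c in enumerate(chunks)]
    return encrypted, hashes  # hashes = the data map, kept by the owner

def self_decrypt(encrypted: list[bytes], hashes: list[bytes]) -> bytes:
    return b"".join(xor_stream(c, chunk_key(hashes, i))
                    for i, c in enumerate(encrypted))

encrypted, data_map = self_encrypt(b"private file contents")
assert self_decrypt(encrypted, data_map) == b"private file contents"
```

Without the data map, the stored chunks are just opaque encrypted fragments, which is why a storage network can hold them without being able to read them.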

However, as good as tools like this are for the storage of our files, as things stand we are unfortunately still reliant on companies to look after our personal information and bank account details. Legislation needs to be tightened to push companies to be much more accountable and responsible with our data. It should demand not only that our data is encrypted, but that sufficient policies and procedures are put in place to maximise its effectiveness, as without these even the strongest encryption can be rendered useless. Providing a high level of data security is simply the cost of doing business, not a nice-to-have feature.

Events like the TalkTalk hack should also remind us how nonsensical recent Government suggestions to ban or weaken encryption are. It is one of the best lines of defence against adversaries and, with its use in all types of commerce, underpins the global economy.


Net Neutrality and the SAFE Network

Per Wikipedia, “network neutrality” is the principle that all Internet traffic should be treated equally. Essentially, the concept calls for a level playing field for all users, whether they be individuals or providers of high-bandwidth services like Netflix, HBO and the rest.

From an egalitarian perspective, the answer as to whether net neutrality is a good idea or not is easy: of course, it is good.

From the perspective of companies which wish to provide high-bandwidth services, as well as the customers who are willing to pay extra to receive those services in the best possible quality, the answer is not so clear. Why, for instance, can’t services such as Netflix, and thus the customers who use them, pay more so that high-access streaming channels can be built up, taking traffic off the other lines and thus making things better for everyone? Whether or not this is a legitimate characterization of the argument, or how it would work out in the real world, one can see that the subject is a little more complicated than the simple idea of equal treatment. And it’s a lot more complex than that, even on a technical level.

Throw in the power (and corruptibility) of government agents, the obligation of service providers to make a profit for their shareholders and thus to purchase government agents to gain advantage, the carelessness of the general population in paying attention to and judging a complex issue, and all the other factors in the mix, and we’re left with a frustrating mess that seems to have no solution. I, myself, can and have argued the matter from both sides with equal passion.

But what if we take a few steps back and look at the assumptions which might be making all this truly unresolvable based on the current debate? Let’s try.

Let’s start by going all the way back, to consider what the nature of a truly neutral medium would have to be.

Put in somewhat simpler terms, let’s look at net neutrality as the effort to arrive at a truly neutral medium, in which no one is discriminated against BY THE MEDIUM ITSELF based on the quality or quantity of the communication they wish to engage in.

To get an idea of what I mean by this, let’s draw a comparison to another vital medium through which much communication travels, with which we all have vast experience, and which is a truly neutral medium: air.

You and I can stand across from each other and say whatever we like, be it loving or vile, and the air does not care. The air dutifully does its job of passing the sounds along from place to place. I can shout to a vast audience, or equally from a lonely mountaintop in a vain (or not) attempt to be heard by the gods: it’s all the same to the air. I can throw flowers or paper airplanes or bullets, and the air has no moral judgement as to which should pass and which should be stopped. There are only the physical dynamics of the different objects, velocities, temperatures, etc., which determine the flight of each. The air does not have the ability to say, “The flowers are good, so they should have easy passage, but the bullets will only be passed on slowly and reluctantly, if at all.”

Now, if you insult me deeply using the air as a medium, I may decide to retaliate with a blow to discourage you from doing so again. The air will discriminate between my responses only on the basis of whether my hand is open or closed, the slap coming slightly more slowly because of the difference in air resistance. If you insult me from cover, disguising your voice, I’ll have a harder time discouraging you, but the air doesn’t know or care.

The air does not discriminate as to who breathes it. Saints and sinners, people of peace and war, of good intentions and bad, all breathe it with equal ease, depending upon their capacity.

Perhaps you get my point by now. In our current society, what we commonly refer to as “the Media” is no such neutral scene. It is common knowledge that news organizations have had a stranglehold on the dissemination of “news” and have used it for decades, in conjunction with government and corporate interests, to color the world view of populations at large, i.e., propaganda. This is basically because the means of communication have been very centralized and subject to control. Radio waves are a neutral medium, but access to them by the general population has been limited by both technology and (more profoundly) centralized political and economic force.

The Internet, as it has come into use, has served as a much more neutral medium. Currently, legacy news and propaganda channels are dying a slow death as upstart bloggers and videographers apply a “death of a million cuts”, exposing their biases and agendas and delivering information that users find more relevant to themselves and more truthful. Politicians and others in positions of power are losing ground as power shifts toward individuals, who can more easily determine when they are being lied to.

But the current structure of the Internet, while better in many ways than anything which has existed before, does not make for a truly neutral medium. Actually, while it makes the shift toward individual freedom of expression much more accessible, it also exposes the individual to liabilities never before faced in all of human history. Exercise of the apparent freedoms comes at the expense of the privacy and security of the individual, which ultimately undermines the very freedoms apparently being gained. Predictive technologies, based upon all the data gathered on individuals and groups, put social manipulation and control within reach of fewer and fewer individuals.

One doesn’t have to look further than the vast revelations which have been made in the last two years by way of Edward Snowden’s disclosures (whether you gauge them heroic or sinister) to appreciate the velvet glove and iron fist with which the surveillance corporation/state is enclosing the broad population.

There is an appearance of great freedom. But at what cost, and how true is that freedom?

(I’m reminded of the great cultural revolution in China, in which Mao said “Let a hundred flowers bloom.” Dissidence and counter-revolution were, for a time, encouraged. Then, once the trouble spots were identified, millions lost their lives. I’m not suggesting that this is necessarily the course of western civilization, but there is a very large history lesson here to be considered.)

So, let’s look back now on the concept of net neutrality. Is net neutrality even remotely possible with the current structure of the Internet? Are we dealing with any sort of neutral medium? I’d have to say no. Therefore, all the social uproar and political action to get agencies and companies to play nice is of little if any use.

Bitcoin and a number of other decentralizing technologies show some hope, but I’d have to say that they are well behind the curve and are likely to be of only marginal utility in securing greater actual freedom for individuals.

Enter the SAFE Network.

Now, I could easily, and justly, be accused of being fanciful on this score, since the SAFE Network is yet to go live and prove itself. But I can’t help myself. The promise is too great and the vision too clear to let these things sit. The more people who see the vision and help bring it to fruition, the better. Even if we fail.

So, before we examine the SAFE Network, let’s look back at the concept of a neutral medium and examine what the elements of a neutral data storage and communication network would have to be.

1. Secure by default. Anyone who accessed it would be able to do so without compromising their financial or data security. This means that individuals would also have complete personal responsibility for their personal and financial data. Sharing it would be an explicit choice.

2. Privacy by default. Anyone accessing the network would be able to have confidence that whatever they did on the network would be completely private, by default. Any choice to share any private data, even their identity, would be an explicit choice. The exposure of private data shared with another person or group would be limited to the worthiness of the trust placed in those receiving that data. Ideally, there would be capabilities of proving valid identifiers cryptographically without having to share actual identity details, unless necessary or desired.

3. Broad access. It could be freely accessed by individuals with very little technical barrier, and no one could deny use of the network if the individual could pass those technical barriers (i.e., a computing device and internet access).

4. Morally neutral. The network could not be subject to central control as to who uses it or the content of the communications, or data stored or retrieved. (Parallel to the air analogy.) The network would handle all of its standard functions of passing and storing data particles with no means of distinguishing amongst them, except to know what to do with them. This would require that the network be composed of nodes provided by users on the assumption that to have the sort of network desired, it is necessary to supply resources to the network to accomplish its purpose, rather than trying to control it.

5. Resistant to compromise. No compromised node would be able to adversely affect the operation of the network at large, nor would it reveal any useful information about the network itself or its users.

6. Scalable. Heavy demand for particular services or items would not require the building of separate centralized infrastructure, or use of methods which could discriminate for or against certain traffic. In other words, for a website or video or service which is in high demand, the network would simply deliver it up faster, the more demand there was, and then return to more usual handling when demand slacked.

I’m sure there are other attributes which could fit in this picture of a factually neutral Internet structure, but that’s probably enough to make the point.

These characteristics, and many more, actually ARE characteristics of the SAFE Network as it has been designed and proven out over the last nine years by the folks at MaidSafe.

Will it work as the design and tests so far promise it will?

Will it fulfill the promise perceived by supporters like me?

Will it, in fact, be a truly neutral medium, where “net neutrality” can actually exist?

We will soon see.

This article was reposted with the permission of author John Ferguson. The original can be found on his site The Crossroad of Project SAFE.

The Next Generation Sharing Economy

The press has been deluged in recent times with reports about the surveillance and eavesdropping employed by intelligence and security agencies around the world. Barely a week goes by without a new revelation being announced. But it cannot be ignored that surveillance is big business, and this is in fact the way in which many of the largest Internet companies make the vast majority of their revenue. These businesses don’t call it surveillance, of course; they call it advertising.

Google generated revenue of $66 billion in 2014, 89.5% of which came through advertising, while Facebook generated 88.5% of its $12.5 billion revenue from the same source. As I said, selling access to us and our data is big business. American cryptographer Bruce Schneier summed the situation up nicely, observing that ‘surveillance is the business model of the Internet’ and informing us, as others did before him, that we are not the customers of these services; we are the product, and the advertisers are the customers.

This model has been the dominant force for a number of years, but is it all the fault of the Googles and Facebooks of this world? Can we lay all the blame at their door? Maybe in part, but we were the all too willing recipients of the ‘free’ services. I suspect that only a small minority of us stopped to think that the seemingly complimentary search, mail, maps and social networking platforms came with a higher price. Although most of us didn’t realise the extent of it at the time, that price was, and is, our privacy.

But it is not only our freedom and liberty that are at stake. Our economic well-being is also at great risk. This may seem a counter-intuitive statement at first glance: how can free services be bad for our economic situation? They may be bad for our privacy, but surely they’re good for our wallets!

Companies such as Google and Facebook act as central intermediaries between us and our data. They spend a substantial amount on infrastructure (servers, data centres, support staff and so on) to host their platforms and our data. But when you think about it, how much information and useful content do they actually produce? The answer, you quickly realise, is very little. They are in fact aggregators and organisers of other people’s content.

So if you think about YouTube (owned, of course, by Google), Facebook or Google search, it is not their videos, feeds and search results being shown; they are ours. We are the content creators here, yet how much are we financially benefiting from this arrangement? Very little of that $66 billion is coming our way.

So this centralised architecture also centralises who benefits financially from the Internet, creating what technologist Jaron Lanier (in his excellent book Who Owns the Future?) terms ‘a winner-take-all economy’. Consider the following chart.


What this depicts is a minority of earners taking the vast majority of the revenue, concentrating the power and wealth in the hands of just a few. This is what the centralising architecture of the Internet currently creates and this is the economy we continue to choose to support when we use these services.

Our desire for free (in the monetary sense) content and services will lead to an ever greater sphere of influence being harnessed by an elite group of people and companies, making decisions that improve their own financial well-being, not ours. It should not escape our attention that many of these large Internet companies are publicly traded; this is significant because the law dictates that their duty is to increase profitability for their shareholders, which is likely only to exacerbate the problem.

As Lanier points out, however, there is an alternative to this winner-take-all economy, one that would improve the online distribution of wealth: reward everyone who contributes to the Internet, directly compensating the content creators rather than channelling the proceeds into the hands of just a few. This would be a much fairer and more sustainable system, returning value to those who create it rather than to those who aggregate it. Don’t get me wrong, aggregation and the hosting of others’ data is a valuable service, but it should not receive 100% of the reward.

In this context, a content creator could be a blogger, journalist, artist, film maker, musician or application developer, or even an end user with a social network account and a video camera. As my colleague Paige Peterson pointed out in an earlier post, the advent of cryptocurrencies like safecoin, with their almost zero transaction fees, enables almost instant micropayments and donations to take place.

Bloggers could be paid or tipped by users as their posts are read and enjoyed. News websites could function in the same way, or they could charge a subscription for providing well-researched and useful information. Potentially, artists and film makers (including those sharing funny home clips on YouTube) could make use of the SAFE Network’s optional watermarking system to ensure that they are rewarded as the originator of content, and continue to be rewarded as snippets of their song or film are built upon, used or aggregated by others. Digitally recording the creator (through an anonymous ID) of each piece of work will enable the network to manage and pay out rewards without human intervention and without corruption.

Some content creators may earn slowly at first, but as their content is used over and over again and accessed by a global network of consumers, their income will grow. This opens the possibility of migrating from a ‘winner-take-all economy’ to a more bell-shaped distribution of wealth (depicted below), where income is much more evenly spread amongst a greater number of earners. In this paradigm, power and wealth would not be funnelled toward an elite minority, creating a next-generation sharing economy.


We may also find that under this new model the content itself will start to improve. The act of paying for something would raise user expectations, and we would demand better material from creators, who would have more time and energy to devote to their chosen area once the ‘real’ jobs used to pay the bills are no longer required.

You may be reading this in agreement with the concept of paying for content and the benefits it may bring, but believe the mindset and economic shift required to make this transition too big and unrealistic. Why would people voluntarily start paying for something that we currently get for free? If I were suggesting that people pay with existing fiat currencies, then I would be inclined to agree. But if users were rewarding content creators with safecoins that they earn by contributing their spare computing resources to the network, then I believe you have something very exciting.

Safecoin will have a very low barrier to entry, making it possible for anyone with a standard commodity PC and an Internet connection to earn it. The SAFE Network also enables micropayments to occur at network speed with almost zero transaction fees. Content creators can then convert the safecoin they receive into another cryptocurrency, or even into cash using decentralised exchanges.

With the SAFE Network coming to fruition in the not too distant future, I anticipate that this is the beginning of the end of our reliance on advertising and surveillance as a business model. The technology companies of the future will be fairly compensated for the services they provide (1 safecoin per hundred searches, or 1 safecoin for every 20 posts, for example) on a new Internet where creating valuable data will become the dominant business model in an economy in which we all share.

Consensus Without a Blockchain

A key requirement of distributed computer networks is a consensus system. Consensus systems enable a specific state or set of values to be agreed upon without the need to trust or rely upon a centralised authority, even if nodes on the network start to fail or are compromised. This ability to overcome the Byzantine Generals Problem makes effective consensus networks highly fault tolerant.

Approaches to reaching consensus within distributed systems are likely to become increasingly important as more and more decentralised networks and applications are born. IBM’s paper Device Democracy confirms that Big Blue envisions a decentralised computing platform powering the Internet of Things, a view that further validates what the Bitcoin and decentralised computing fraternity have known for some time: that decentralised networks offer an efficiency and robustness simply not possible in centrally controlled systems.

Why Not Use a Blockchain?

Almost all consensus-based systems within the cryptocurrency community, Bitcoin included, use a blockchain: an immutable, append-only public ledger that maintains a database of all the transactions that have ever taken place. While this ledger is advantageous for many different types of operation, it also comes with its own set of challenges.

One of the most commonly cited issues is scalability. Specifically, in order for the network to reach consensus, this increasingly large and centralising file must be distributed between all nodes. In the early days of the network this wasn’t a major issue; however, as the network continues to grow in popularity, the Bitcoin blockchain is now a 27GB file that must be synced between the network’s 6,000-plus nodes.

So, if we move the concept of consensus into the context of a decentralised data and communications network, we can start to evaluate how effective the existing bitcoin consensus mechanism would be.

Within a data network, if an end user makes a request via the client, they expect to be able to set up their credentials, store or retrieve their data instantaneously, and know that the operation has been carried out correctly. In essence, they require network consensus to be reached at network speed (in fractions of a second), a tricky problem to solve in a large decentralised network. Within the Bitcoin network, the first round of consensus occurs after 10 minutes, with each further block consolidating the transaction. This transaction speed is clearly unacceptable in a data network environment, and any attempt to increase the block frequency to circumvent the issue would have significant negative consequences for security.

Close Groups

What we need is a decentralised consensus mechanism that is both fast and secure. But how do you reach consensus rapidly across an increasingly large network of decentralised nodes without compromising security?

The answer lies in close groups. Close group consensus involves utilising a small group, a fraction of the network’s size, to state a position on behalf of the whole network. This removes the need for the network to communicate with all nodes. Within the SAFE Network the close group size is 32, with a quorum of 28 required to enable a specific action to be taken. An example may help explain this point.

Let’s assume that Alice wants to store a new photo. As Alice stores the image, it is encrypted and broken up into chunks as part of the self-encryption process and passed to a close group of Client Managers. This close group comprises the vault IDs closest to the user’s vault ID in terms of XOR distance, distance measured in the mathematical sense as opposed to the geographical sense. The Client Managers then pass the chunks to thirty-two Data Managers, chosen by the network because their IDs are closest to the ID of the data chunk, so the chunk ID also determines its location on the network.

Once consensus is reached, the Data Managers pass the chunks to thirty-two Data Holder Managers, who in turn pass the chunks to Data Holders for storage. If a Data Holder Manager reports that a Data Holder has gone offline, the Data Managers decide, based on rankings assigned to vaults, which other vault should hold the chunk of data. In this way the chunks of the original file are constantly monitored and maintained, ensuring the original data can always be accessed and decrypted by the original user.
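The idea of XOR “closeness” used to pick these groups can be sketched in a few lines of Python. The 32-node group size follows the article; the ID length, helper names and use of SHA-512 here are illustrative assumptions, not the network’s actual implementation.

```python
# Illustrative sketch of XOR-distance closeness for close group selection.
# GROUP_SIZE follows the article; hashing names to IDs is an assumption.
import hashlib

GROUP_SIZE = 32  # close group size described in the article

def node_id(name: str) -> int:
    """Derive an illustrative network ID from a name by hashing."""
    return int.from_bytes(hashlib.sha512(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Distance in the mathematical (XOR) sense, not the geographical one."""
    return a ^ b

def close_group(target: int, all_ids: list) -> list:
    """The GROUP_SIZE node IDs closest to `target` in XOR distance."""
    return sorted(all_ids, key=lambda n: xor_distance(n, target))[:GROUP_SIZE]

# Example: the chunk's ID determines which vaults manage it.
nodes = [node_id(f"vault-{i}") for i in range(200)]
chunk = node_id("photo-chunk-1")
group = close_group(chunk, nodes)
assert len(group) == GROUP_SIZE
```

Because IDs are effectively random, XOR distance spreads responsibility for chunks evenly across the network with no central coordinator deciding placement.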

A client trying to store a piece of data without the group’s consensus, because they don’t have enough safecoin, for example, would be refused.
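The quorum rule described in this section (28 of the 32 close-group nodes must agree) can be sketched as a simple vote count. The numbers come from the article; the voting interface itself is a hypothetical simplification.

```python
# Sketch of the close group quorum rule: an action proceeds only when at
# least QUORUM of the GROUP_SIZE nodes approve it. Numbers from the article.
GROUP_SIZE = 32
QUORUM = 28

def consensus_reached(votes):
    """True when the close group's approvals meet the quorum."""
    assert len(votes) == GROUP_SIZE
    return sum(votes) >= QUORUM

# 29 approvals out of 32: the action is permitted.
assert consensus_reached([True] * 29 + [False] * 3)
# 27 approvals: below quorum, the action is refused.
assert not consensus_reached([True] * 27 + [False] * 5)
```

Requiring a high quorum means a handful of compromised or failed nodes in a group cannot force an invalid action through, while the group remains tiny relative to the whole network.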

Cryptographic Signatures

Close group consensus is not used for every operation, however. Utilising this complexity for every operation would be inefficient and would put an unnecessary load on the network. Close group consensus is only used for putting data on the SAFE Network, such as a file, a message or, in time, computation. For making amendments to data, such as changing the contents of a file (creating a version) or sending a safecoin to another user, cryptographic signatures are used to authorise the action.

Cryptographic signatures were first conceptualised in the 1970s, and products containing them were first widely marketed in the late 1980s. The SAFE Network utilises RSA 4096, a very well established and tested algorithm. Cryptographic signatures mathematically validate the owner of any piece of data and can prove ownership beyond any doubt, provided the user has kept their private key safe. If the end user is the owner of a piece of data and can demonstrate this fact by digitally signing their request with their private key, the network permits them to change the data.
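The sign-and-verify flow can be illustrated with textbook RSA on tiny primes. This is a toy, deliberately insecure sketch of the mathematics only; the real network uses RSA 4096 with proper padding, and all the numbers and helper names below are for illustration.

```python
# Toy textbook-RSA illustration of signing a request with a private key
# and verifying it with the public key. Insecure by design: tiny primes,
# no padding. For illustrating the ownership-proof idea only.
import hashlib

# Tiny, illustrative RSA key pair.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def digest(data: bytes) -> int:
    """Hash the data down to a number below the modulus."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes, priv: int) -> int:
    """The owner signs the digest with their private key."""
    return pow(digest(data), priv, n)

def verify(data: bytes, sig: int, pub: int) -> bool:
    """Anyone can check the signature using only the public key."""
    return pow(sig, pub, n) == digest(data)

request = b"amend file: holiday.jpg"
sig = sign(request, d)
assert verify(request, sig, e)  # the owner's signed request is accepted
```

Only the holder of the private key can produce a signature that the public key validates, which is exactly how the network checks that a mutation request really comes from the data’s owner.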

Avoiding the double spend

The lack of a blockchain raises one additional question, though. Without a distributed immutable ledger, how do you eliminate the problem of the double spend? This is something managed very effectively by the Bitcoin network, which verifies each and every transaction and records it on the blockchain.

Safecoin, the currency of the SAFE Network, is generated in response to network use. As data is stored, or as apps are created and used, the network generates safecoins, each with its own unique ID. As these coins are divisible, each new denomination is allocated a new and completely unique ID, highlighting the importance of having a 2^512 address space.

As coins are allocated to users by the network, only the specific owner can transfer ownership of a coin, by cryptographic signature. For illustrative purposes: when Alice pays a coin to Bob via the client, she submits a payment request. The Transaction Managers check that Alice is the current owner of the coin by retrieving her public key and confirming that the request has been signed by the corresponding private key. The Transaction Managers will only accept a signed message from the existing owner. This proves beyond doubt that Alice owns the coin; ownership of that specific coin is then transferred to Bob, and now only Bob is able to transfer the coin to another user. This sending of data (coins) between users is in contrast to the Bitcoin protocol, where bitcoins aren’t actually sent to another user; rather, Bitcoin clients send signed transaction data to the blockchain.
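The double-spend protection described above can be modelled as a current-owner registry. In this sketch the signature check is reduced to comparing the claimed owner against the registry; the real Transaction Managers verify an RSA signature, and all class and key names here are hypothetical.

```python
# Minimal model of ledgerless coin transfer: Transaction Managers hold only
# the *current* owner of each coin -- no transaction history, in contrast
# to a blockchain. Signature verification is abstracted away here.

class TransactionManagers:
    def __init__(self):
        self._owner = {}  # coin_id -> current owner's public key

    def mint(self, coin_id, owner_pub):
        """The network allocates a freshly generated coin to a user."""
        self._owner[coin_id] = owner_pub

    def transfer(self, coin_id, signer_pub, recipient_pub):
        """Accept a transfer only if it comes from the current owner."""
        if self._owner.get(coin_id) != signer_pub:
            return False  # not the owner: a double spend is refused
        self._owner[coin_id] = recipient_pub
        return True

tm = TransactionManagers()
tm.mint("coin-42", "alice-pub")
assert tm.transfer("coin-42", "alice-pub", "bob-pub")        # Alice pays Bob
assert not tm.transfer("coin-42", "alice-pub", "carol-pub")  # Alice can't respend
assert tm.transfer("coin-42", "bob-pub", "carol-pub")        # only Bob can now spend
```

Because ownership is overwritten rather than appended to a history, a second spend of the same coin by a previous owner simply fails the owner check, with no global ledger needed.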

This is not an attack on blockchains

The lack of a blockchain means that it is not possible to scrutinise all the transactions that have ever taken place, or to follow the journey of a specific coin. In this respect, safecoin should be thought of as digital cash, where only the existing and previous owners are known. Users who value their anonymity will see this as a distinct advantage, and it should not be ignored that this enables an unlimited number of transactions to occur at network speed.

However, this blockchain-less (yes, a new word) consensus mechanism does not provide the scrutiny and transparency desirable in some financial systems, or in general record keeping. This raises an important point. It is not the intention here to suggest that one consensus mechanism is better than another, or that there is one consensus mechanism to rule them all. Instead, we should consider that each is better suited to a different purpose. This is why the incredibly bright future for distributed consensus will come in many forms, with each iteration better than the last.

To find out more about the SAFE Network and how it works, please visit our System Docs.