Developer Case Study: Project Decorum

During 2016, MaidSafe have become aware of a number of projects that are building on top of the SAFE Network. One such project is Decorum.

What is it?

Project Decorum is currently a research-led project, run by Harmen Klink, a computer science undergraduate at the HU University of Applied Sciences Utrecht in the Netherlands. He wants to build a social media platform that gives users greater control of their data and therefore enhanced privacy – rather than today’s model, which is centralised around a few service providers.

Project Decorum is currently a proof of concept, which Harmen designed to support a crowdsale that went on to raise over €400,000. He is aiming to use this investment to further develop the application, aspiring to create a hybrid of the best features of existing major applications such as Facebook, Reddit and Twitter.

How does it work?

Because the SAFE Network is designed around a “serverless” architecture, the core protocol of Project Decorum acts as a substitute for the central coordinator that would otherwise be missing. It consists of a set of rules that describe where and how conversational data should be uploaded to the SAFE Network. These rules predict where the replies to a particular message on the SAFE Network might end up, no matter where the original is located. This means that all applications and SAFE websites that use this protocol will be compatible with each other, making communication simpler.

On the data level, all information is visible, and the protocol organises conversations in a tree structure where every node of the tree represents a message from a user. Replies to earlier messages create new branches. This tree structure lends itself well to being represented in a “threaded” format, as done by many well-known forums and comment plugins. A user interface decides what data users see, and anyone can create a new root to start a new tree for a new conversation. This can be used to create a forum, a comment section on a blog, a group chatbox, and so on.
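As a rough illustration only (not Project Decorum’s actual data format), such a conversation tree could be modelled as nodes that each hold an author, a message body and a list of replies. The Rust sketch below uses hypothetical names purely for explanation; in the real protocol, messages live at predictable locations on the SAFE Network rather than in local memory.

```rust
// Hypothetical sketch of a threaded conversation tree.
// Names and fields are illustrative only; the Decorum protocol itself
// defines where each message is stored on the SAFE Network.
struct Message {
    author: String,        // identity of the poster
    body: String,          // the message content
    replies: Vec<Message>, // each reply starts a new branch of the tree
}

// Render the tree in the familiar "threaded" format.
fn print_thread(msg: &Message, depth: usize) {
    println!("{}{}: {}", "  ".repeat(depth), msg.author, msg.body);
    for reply in &msg.replies {
        print_thread(reply, depth + 1);
    }
}

fn main() {
    // A new root starts a new conversation, e.g. a forum topic.
    let root = Message {
        author: "alice".into(),
        body: "Welcome to the forum".into(),
        replies: vec![Message {
            author: "bob".into(),
            body: "Thanks!".into(),
            replies: vec![],
        }],
    };
    print_thread(&root, 0);
}
```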

In Project Decorum users will own their data and everyone is their own moderator through the use of personal ignore lists. In principle, particular posts or users can be put on such an ignore list. It is also possible to subscribe to one or more ignore lists run by other people. This allows for dedicated and widely accepted moderators to naturally rise up in their respective communities. Active people with sound judgement will be subscribed to as moderators by groups. These people can also collaborate to form a moderator team, and possibly accept donations or even charge for their moderation services. Multiple teams with different rules can be active in the same community if there is demand.

Why is Project Decorum working with MaidSafe?

Harmen chose the SAFE Network for his project for several reasons. He believes the privacy and security of the platform should be a prerequisite for any Internet application. Furthermore, the decentralised model offers great scalability, and he has found it hard to overload the system. Additionally, safecoin is a great feature because of the way it is integrated into the network and offers instant rewards. This will help to sustain engagement with the platform, as social payments are an important feature increasingly expected by users. It also offers developers the flexibility to extend tokenisation beyond safecoin, creating cryptocurrencies that can represent all kinds of assets.

What’s next for Project Decorum?

The next steps for Project Decorum include working on designs to make them more tangible and figuring out the business model. As APIs for the SAFE Network become available and more stable, Harmen will continue development of the protocol. MaidSafe hope that features such as the automatic reward mechanism for participants will enable Harmen to further develop the usage model for Project Decorum.

Harmen Klink, Founder, Project Decorum

“I believe having access to multiple identities is an important benefit of the SAFE Network, because it reflects the varied identities and roles we play in our personal and work lives. The network of identities forms a web of trust that can be used to distinguish legitimate users from abusive bots. When a real name is coupled to an identity, the strength of the web of trust is also used to show others the likelihood that those two truly belong together. This protects users from becoming victims of impersonation and identity theft.”

Structuring Networks with XOR

Understanding the SAFE Network on a technical level, including the consensus process, requires knowledge of the underlying structure which powers it as a decentralized, autonomous system. Peer-to-peer networks can be categorised into two general types: structured and unstructured. In unstructured networks, there is no explicit organization of nodes and the process of joining and forming connections to others is random. In structured networks, like those which use a distributed hash table (DHT) such as Bittorrent or the SAFE Network, the overlay protocol includes a structure for organizing the network, which makes purposeful connections to specific nodes more efficient.

One of the most widely adopted DHTs, Kademlia, was originally popularised through its implementation in Bittorrent’s Mainline DHT, which removed dependence on central trackers for finding the locations of nodes and data stored on the network. Kademlia employs a rather simple operation called “exclusive or” (XOR) to establish a mathematical relationship between node identifiers. While SAFE uses a modified version of Kademlia, the XOR operation is consistent across implementations, and understanding it will give insight into all networks based on Kademlia.

Comparing Bits

To best understand how XOR facilitates a structured, p2p network, let’s start from the very basics of the operation, which at its foundation compares two inputs and outputs their difference. The input numbers used are binary, meaning they are made up of only 0s and 1s. The mathematical symbol for an XOR operation is ⊕.

To show the simplicity of calculating the XOR output of two binary numbers, let’s first look at an example with fairly large numbers as inputs:

Input A: 11001010011010100100010101011110
Input B: 01001000101110011010011111000101

Now, to find the XOR output, simply compare the bits (a bit is a single digit in a binary number) individually and in order. Where the bits are the same, place a zero (0) and where the bits differ, place a one (1).

The table below shows the calculation for the 32-bit inputs we chose, where input A is the first row, input B the second row and the XOR output the last row.

1 1 0 0 1 0 1 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 0 1 0 1 1 1 1 0
0 1 0 0 1 0 0 0 1 0 1 1 1 0 0 1 1 0 1 0 0 1 1 1 1 1 0 0 0 1 0 1
1 0 0 0 0 0 1 0 1 1 0 1 0 0 1 1 1 1 1 0 0 0 1 0 1 0 0 1 1 0 1 1


Since the first bit in input A is 1 and the first bit in input B is 0, the XOR output for that digit is 1. Meanwhile, the second bit in both numbers is 1, so the XOR output for that digit is 0, and the third bit in each number is 0, so the XOR output for that digit is also 0. By comparing each digit down the line as the same or different, we arrive at an XOR output of 10000010110100111110001010011011. The decimal conversion of that value is 2194924187, which is not such a straightforward calculation; however, it can be helpful to know how the pattern of binary counting works:

0=0
1=1
10=2
11=3
100=4
101=5
110=6
111=7
And so on...
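To make the arithmetic above concrete, here is a small Rust sketch (illustrative only) that XORs the two 32-bit inputs from the table and prints the result in both binary and decimal:

```rust
fn main() {
    // The two 32-bit inputs from the example above.
    let a = u32::from_str_radix("11001010011010100100010101011110", 2).unwrap();
    let b = u32::from_str_radix("01001000101110011010011111000101", 2).unwrap();

    // XOR compares the inputs bit by bit: matching bits give 0, differing bits give 1.
    let c = a ^ b;

    println!("{:032b}", c); // 10000010110100111110001010011011
    println!("{}", c);      // 2194924187
}
```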

Properties of XOR

Now, to get a grasp on the usefulness of XOR calculations, let’s take a step back and focus on 1-bit numbers as our inputs.

XOR operations on 1-bit numbers (0, 1)

Input A   Input B   Output C   Operation (A⊕B==C)
   0         0         0       0⊕0==0
   0         1         1       0⊕1==1
   1         0         1       1⊕0==1
   1         1         0       1⊕1==0


Using the table above (which shows every possible combination of those values), we can see that regardless of the input values, if they are equal to each other, the output is zero. Alternatively, if the input values are not equal, the output is a non-zero value (1 in the case of 1-bit values). The last characteristic we can gather from this table is that if we swap A for B, then C stays the same; in mathematics this is called the commutative property and can be expressed as:

if A ⊕ B == C then B ⊕ A == C

Furthermore (but a bit more difficult to tell in this simple example), if we swap A or B for C, the new output will be the value which C replaced and can be expressed as:

if A ⊕ B == C then C ⊕ B == A and A ⊕ C == B
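These properties can also be checked mechanically. The short Rust sketch below (again purely illustrative) asserts the commutative and swap properties for an arbitrary pair of 8-bit values:

```rust
fn main() {
    let (a, b) = (0b1100_1010u8, 0b0100_1000u8);
    let c = a ^ b;

    // Commutative: swapping A and B leaves C unchanged.
    assert_eq!(b ^ a, c);

    // Swapping A or B with C returns the value that C replaced.
    assert_eq!(c ^ b, a);
    assert_eq!(a ^ c, b);

    println!("the XOR properties hold for {:08b} and {:08b}", a, b);
}
```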

We can now observe how these properties hold true with slightly larger binary values.

XOR operations on 2-bit numbers (00, 01, 10, 11)

Input A   Input B   Output C   Operation (A⊕B==C)
  00        01        01       00⊕01==01
  00        11        11       00⊕11==11
  01        01        00       01⊕01==00
  01        10        11       01⊕10==11
  10        11        01       10⊕11==01
  11        01        10       11⊕01==10
  11        11        00       11⊕11==00


Using the table above (which only shows a sample of the possible combinations), we can still see that equal inputs give an output of zero (00), unequal inputs give a non-zero output, and the property where any of A, B or C can be swapped for each other in the operation remains valid. As the binary numbers grow larger, these characteristics of XOR operations continue to hold. Additionally, we can deduce that the XOR output of two values A and B (also called the XOR distance) will always be unique for those inputs. In other words, there is only one value B at a given distance C from a given value A, which can be expressed as:

if A ⊕ B == C then no other value D satisfies A ⊕ D == C, and no other value D satisfies A ⊕ B == D
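The uniqueness property follows directly from the swap property: if both B and D were at distance C from A, then each would equal A ⊕ C, so B and D would be the same value. A brute-force check over the 2-bit space (purely illustrative) confirms it:

```rust
fn main() {
    // For every value A and every distance C in the 2-bit space,
    // exactly one value B satisfies A ^ B == C.
    for a in 0u8..4 {
        for c in 0u8..4 {
            let matches: Vec<u8> = (0u8..4).filter(|&b| a ^ b == c).collect();
            assert_eq!(matches.len(), 1);
        }
    }
    println!("each (A, C) pair has exactly one matching B");
}
```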

XOR Relationships in Networks

With basic understanding of XOR characteristics, let’s now explore how it maps onto a peer-to-peer network using binary tree graphs.

[Figure: sm-binary-trees – binary tree graphs for the 1-bit and 2-bit networks]

The two graphs above illustrate the simple tables we used to explain XOR properties with the left side being a 1-bit network (two nodes) and the right side, a 2-bit network (four nodes). Within each graph, a step to the left at a vertex point adds a zero (0) bit to the end of the number and a step to the right adds a one (1).

[Figure: big-binary-tree – binary tree graph for a 5-bit, 32-node network]

For a better understanding of these properties in larger networks, the graph above shows a 5-bit XOR network consisting of 32 nodes (00000 to 11111 in binary, 0-31 in decimal) and follows the same vertex stepping rule. The two blue coloured nodes, 12 (01100) and 15 (01111), have an XOR distance of 00011 (3 in decimal), while the two orange nodes, 16 (10000) and 31 (11111), have an XOR distance of 01111 (15). However, even though the blue node 15 and the orange node 16 are next to each other in the line, their XOR distance is even larger at 11111 (31), which shows that XOR distance does not follow the same assumptions as the distances we are used to. Since every node has a unique distance to every other node in the network, it becomes quite simple to reliably relate them to each other using this property; finding the nodes closest to a given node is as straightforward as XORing its ID with the smallest distances (think back to the property of swapping values in XOR calculations).

Say we want to find the closest 4 values to the input value 01010 (10); we can XOR the input value with the 4 smallest non-zero distances: 00001 (1), 00010 (2), 00011 (3) and 00100 (4).

01010 ⊕ 00001 == 01011 (11)
01010 ⊕ 00010 == 01000 (8)
01010 ⊕ 00011 == 01001 (9)
01010 ⊕ 00100 == 01110 (14)


Now, if we take one of those closest values, say 01110 (14), and again XOR it with the 4 smallest non-zero distances, we get a unique set of outputs.

01110 ⊕ 00001 == 01111 (15)
01110 ⊕ 00010 == 01100 (12)
01110 ⊕ 00011 == 01101 (13)
01110 ⊕ 00100 == 01010 (10)


Since the XOR distance between a particular value and every other value is unique, the closest value set will also be unique. With that in mind, imagine a network using XOR calculations to relate node IDs, where each node establishes connections with and stores data about its closest nodes. By communicating with the group of closest nodes and asking what each of their closest nodes are, any single node can eventually locate any other node in the network, creating a distributed database.
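A toy version of this closeness calculation, far simpler than a real Kademlia routing table and using made-up names, simply sorts every known ID by its XOR distance to a target and keeps the nearest few:

```rust
fn main() {
    let target: u8 = 0b01010; // node 10 from the 5-bit example above

    // Pretend every ID in the 5-bit space is a known node (except the target itself).
    let mut ids: Vec<u8> = (0u8..32).filter(|&id| id != target).collect();

    // Sort by XOR distance to the target; a smaller distance means "closer".
    ids.sort_by_key(|&id| id ^ target);

    // The four closest IDs match the tables above: 11, 8, 9 and 14.
    let closest: Vec<u8> = ids.iter().take(4).copied().collect();
    println!("closest to {:05b}: {:?}", target, closest);
    assert_eq!(closest, vec![11, 8, 9, 14]);
}
```

In a real network, of course, a node only ever knows a fraction of all IDs, so it keeps the closest IDs it has seen and asks those nodes for anything closer, exactly as described above.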

Decentralization Requirements

It is extremely important in XOR networks that each node has a unique ID. In decentralized networks where there is no central party to issue and enforce unique IDs, this task requires a large enough ID range (i.e. 128-bit or 256-bit numbers) to reduce the chance of overlap, and a secure hashing algorithm to prevent predetermination of a node’s ID and therefore of the node’s placement in the network. Due to the necessarily large ID space and random placement, decentralized networks will not have nodes occupying every value in the ID range; the closest nodes are therefore most likely not going to be at the smallest possible distances, as in the example above. Regardless of what the actual closest nodes are, the relationships between them allow each node to maintain a narrow perspective of the network and use its established connections to scope beyond that when needed. This type of relay network makes it easier to discover data and route messages in a targeted but decentralized manner.

This concept is the foundation for Kademlia-based distributed hash tables (understand the name better, now?), including those used in Bittorrent and SAFE. In Bittorrent, this discovery pattern allows nodes to discover which other nodes are storing a particular file. In SAFE, the data on the network is identified with IDs in the same range as nodes so that the data itself can be mathematically related to nodes for storage purposes in the same way nodes are related to each other. XOR closeness is the basis for the SAFE Network’s consensus processes to ensure data reliability and security. This will be covered in more detail in a future post now that we have established an understanding of XOR properties in structured, p2p networks.

Stable Droplet Network

News For App Developers

In the pretty aggressive recent network tests, though, we left app developers a little behind. There was no secure way they could use the APIs and get more involved in the RFC / API process for applications (surely one of the most important aspects of SAFE). To improve matters, we are splitting the continuing tests into vault tests and client tests. We will run at least one more vault test soon, but in the meantime we will run a stable network on the Digital Ocean droplets, starting from today. This client test network will not test anything to do with vaults and will be reserved for app developers to work against something stable.

Please download the launcher here and the demo app here

Updates for the launcher include:

  • Updated logs to provide more useful information.
  • Fixes for self-encryption crashing apps when the network is full (you probably will not see this; it is a low balance error. The client still does not report this effectively, but this will be addressed during the RFC process below).

Also, the Launcher is looking at some significant improvements as per this RFC.

Other community members are more than welcome to keep participating in the vault tests, as this is a huge help (we cannot emulate humans or real-world internet connectivity). At the moment we really, really do not want anyone to build vaults to run against the client test network (otherwise we need to use resources to block that via changes in code).

Along with all of this, we will be doing some more PR activities very soon, which will enable the project to move much more quickly. Over the next few short weeks, we hope to have lots of information on many aspects of the project moving forward. So technical, PR, structure, resources etc. will all get a bit of a push soon.

Thanks again folks, all of this testing is truly helpful and we are reacting to each and every test with a rabid ferocity to make adjustments and test again. GitHub issues and RFCs are being looked at a lot, so please keep this up; no matter how small, every little thing helps to make this not only a success, but more importantly to get it into people’s hands more quickly.

This is taken from a forum post made by David on the forum earlier today.

Theories on Incentives, Scaling & An Evolved Economy Pt 3

So far this series has explored how our increasingly digital world is both affecting and being affected by an increasingly global economy. Through this exploration, we can discover more scalable and sustainable foundations for building tools which better integrate these systems, and improve how we communicate these tools and their purpose for wider adoption. The history of money and trade can show us some possible outcomes of scaling economic systems, and the influx of intellectual property on the Internet has stressed the need for progress. Thankfully, more recent experiments and innovations have provided stepping stones towards increasingly sustainable global economic models. Looking at the structures and incentives of various networks and analysing their effects has paved a path for the improvements built into the foundations of the SAFE Network, and continuing to push these boundaries through SAFE and other projects will allow for more realistic and sustainable rewards for the labor behind intellectual property and, subsequently, the global economy. (Disclaimer: Some references in this post assume having read the content from the first two parts of this series.)

Pushing the Boundaries and Building the Bridges

So it’s clear that scaling a system globally without spiraling downward into a cycle of centralization leading to control, power and increasing self-interest is a really difficult task. One of the reasons I’m optimistic about the SAFE Network is its approach to reaching global scale by mimicking how nature scales itself, while still attempting to work within the existing economy as an evolutionary step. Many global systems are already headed in the right direction by thinking about maintaining autonomy such that no one party has control. Bitcoin, Bittorrent and Tor are all examples of systems which strive for autonomy through decentralisation as a form of security. With regards to economics, both the Tor and Bittorrent networks are run purely on a volunteer basis and do not depend on a currency or credits within their systems. Users are incentivised to provide resources as nodes with the expectation that they receive value in return from others who also participate in the network, very similar to the previously explored gift economy model. In Bittorrent, though, each node is additionally shown an upload-to-download ratio which allows users to gauge how much peer nodes have accessed content from them versus how much they’ve accessed from others. The visibility of this ratio offers a slight psychological boost to the incentive for nodes to stay connected and to continue providing their resources to the network, but it shows vulnerabilities in scaling globally similar to those of gift economies. Even without an explicitly integrated credit or currency, Tor and Bittorrent have found success in pushing the boundaries of our online economy to evolve away from business models based on tracking and forcing scarcity on content, a critical part in this evolution.

Alternatively, Bitcoin builds a bridge for the online economy as a currency which facilitates value transfer and storage outside of centralised third parties. Unfortunately, the distribution of Bitcoin has driven the network to become more and more centralised, in part by the force of economies of scale. The main benefit that Tor and Bittorrent gain from a lack of currency incentives is avoiding the tendency towards centralisation based on participants’ attraction to accelerating wealth. The common and open ownership designs of each of these systems allow them to initially scale as global networks, while the variance in incentive structures provides models for sustaining their structures. Maintaining its diverse peer-to-peer structure is Bitcoin’s key separation from any other global payment network, and whether it ultimately succeeds or fails, it is clear that the implementation of currency-based incentives brings different kinds of scaling problems to decentralised systems. Regardless, each of these networks provides a stepping stone towards future designs which can more desirably bridge this gap, and in particular the foundations we’re building in SAFE.

A Foundation for Ideas, Art and Code within An Evolved Economy

The SAFE Network is designed to push the boundaries of IP ownership and tracking while still integrating a bridge to the current economy. The network is built on a foundation of anonymous and autonomous management of data storage, routing and integrity. It allows any data, such as a network token, to be stored, transferred and confirmed in “localised” closed groups of nodes which access the minimal amount of data needed to do their jobs. When peers go offline or come online, the network reconfigures based on their positions in the ID space of the DHT. Network nodes are assigned IDs, and thus positions in a group, at random to avoid predetermination of the roles assigned to them. If a node successfully stores data and supplies it when requested, the “farming” incentive scheme will reward it with a token created and managed by the network. It’s important to note that this incentive distribution allows for much more granular measurements of resource capability than block rewards in Bitcoin and other blockchain cryptocurrencies. This more dispersed distribution scheme was designed to prevent some of the natural centralization that is occurring with Bitcoin. With the further understanding that economies of scale are still a strong force in our economy, SAFE additionally employs a sigmoid distribution rate which rewards nodes supplying close to the network-average amount of resources, so that nodes with drastically more resources don’t centralize storage of content. With this focus on granular incentives and rewarding average network resources in token distribution, we can improve the system’s ability to integrate with the global economy while avoiding some of the key centralisation effects of economies of scale.
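As a toy illustration only, and emphatically not the actual safecoin algorithm (whose details are still being refined), one way to picture a rate that favours near-average nodes is to combine two logistic (sigmoid) terms so that the reward multiplier peaks for nodes supplying roughly the network average and tails off for drastically larger nodes:

```rust
// Toy reward curve, NOT the real safecoin farming algorithm: two logistic
// (sigmoid) terms are combined so the multiplier peaks for nodes supplying
// close to the network average and tails off for drastically larger nodes.
fn reward_multiplier(node_supply: f64, network_average: f64) -> f64 {
    let x = node_supply / network_average;
    let rise = 1.0 / (1.0 + (-8.0 * (x - 0.5)).exp()); // ramps up toward the average
    let fall = 1.0 / (1.0 + (8.0 * (x - 2.0)).exp());  // tails off well above it
    rise * fall
}

fn main() {
    for x in [0.25, 0.5, 1.0, 2.0, 5.0, 20.0] {
        println!("{:>5}x average supply -> multiplier {:.2}", x, reward_multiplier(x, 1.0));
    }
}
```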

Further, the network is prepared to include incentives for application developers, core developers and content creators. By allowing individuals to opt into embedding payment addresses into applications and content, a “creator” can be established and given a baseline revenue in tokens from the same pool which rewards resource providers. The belief is that these participants in the network bring value to the ecosystem and should be included in the distribution structure. The inclusion of diverse incentives is an important experiment in finding the ideal bridge to a better economic model for developers, journalists, artists and researchers sharing their intellectual property.

To complete the basic ecosystem in SAFE, there is a replenishing method for the pool of safecoin which is distributed to these key participants. Payments back into this pool are linked to storing new data on the network. At its best, this recycling mechanism can allow for a perpetual supply of safecoin for these incentives, and at the very least it provides a way to extend the ability to autonomously pay members of the ecosystem. The token requirement for users uploading new content will help to prevent abuse by parties storing significant amounts of content without giving back by providing network resources themselves. This concept of balancing consumption against providing a service is similar in theory to the ratio displayed in Bittorrent clients, but is enforced by the network through the token mechanism. So long as the network incentives align properly, the SAFE Network economy will establish a very basic common credit system that, through its autonomous nature, should scale to support public intellectual property and offer alternatives to privatising software and content through physical or legal restrictions.

Extending the Foundation

The SAFE Network will not only integrate a basic infrastructure for sustaining a global economy but also relies on experimentation on top of it. While it would be great if SAFE could sustain itself with this economic foundation, we cannot assume that it will be enough. Experimentation is an important factor in evolving systems (as nature shows us), but the historical control and extreme centralization of the economy has hampered these processes until very recently. New schemes being tested on today’s Internet will find a comfortable and more suitable home on top of SAFE’s economic foundation. Expect to see business models like pay-what-you-want and crowdfunding take on whole new capabilities with the support of safecoin and the autonomous, cryptographically secured network. Additionally, granular approaches to digital consumption are ripe for better integration into the economy, as are methods for working with existing private intellectual property whose use may carry certain licensing restrictions. In particular, I envision methods similar to git’s distributed revisioning, but with integrated payment options, adapted to include content like research and art. So much intellectual property is based on previous discoveries, thoughts or works, and integrating that flow into a functional platform would be a worthwhile experiment to extend the SAFE economy into a more scalable network that sustains itself and supports the inherently non-scarce characteristics of intellectual property.

The impact of intellectual property is no doubt expediting our way to an economic evolution, but there’s still a long way to go before we can stop forcing digital content and general intellectual property to be artificially scarce as a solution for supporting the labor that went into creating and maintaining it. By learning lessons from how our current economy has adapted to scaling (whether successfully or not), it will be much easier to build experiments which fit the world of intellectual property and successfully support that labor in a global manner. Creating the proper incentives to allow for more free software development and open access to knowledge through implementing new solutions is the quickest way to build a more sustainable economy and show others the value in publicly owned, decentralized networks.

The SAFE Network Release Cycle

As you may have gathered from the even greater amount of activity on GitHub (I didn’t think it was possible either), the core of the SAFE Network has been getting tested both internally and externally as we get ever closer to a stable decentralised network. While details about the team’s development progress continue to flow via the forum, the purpose of this post is to establish the main phases of the impending and iterative release process. These are:

  • Alpha – the core network and fundamental functionality is provided via an initial implementation; this is the first public testing phase.
  • Beta – alpha implementations are reiterated and improved, based on user feedback.
  • Release candidate – the fundamental feature set is stabilised to provide greater security, resilience and efficiency.
  • Release – privacy, security, freedom!

The speed at which MaidSafe will iterate through the alpha testing phase is unknown and will be dependent upon how well the network performs at this stage. However, it is anticipated that having the core network in place will make it significantly easier to test the addition of new features than ever before. Testing against mock networks is only useful up to a point!

There will be several alpha releases, which will commence in simple numerical order, each denoting an incremental improvement on the previous version. For example, as per the roadmap, alpha 1 will come with: public ID management, public and private data storage, vault configuration and desktop installers (64 bit Windows, Mac and Linux). The second alpha iteration will include additional features and will be called alpha 2, and so on.

SAFE Network Fundamentals

The fundamental features, beyond the alpha 1 release, have been defined as:

  • Contact management
  • Messaging
  • Test safecoin
  • Vaults running on ARM architectures
  • Launcher UI
  • Safecoin wallet

The alpha release will gradually implement this functionality in an iterative cycle and provide the features highlighted above. However, this will be the first iteration of these features and development on them will continue until the engineering team are satisfied that the implementation provides the desired functionality. At this point, the network will transition to beta. When in beta, these features will become more elegant, efficient and secure. The release candidate will see the features frozen and further stabilised prior to full release at which point safecoin will go live.

In tandem with this release cycle, both users and developers can expect the ongoing release of APIs that reflect access to ever increasing network functionality, as well as example applications that showcase features of the network to end users and also act as tutorials to developers.

Out of Beta and Moving Forward

Beyond release, MaidSafe, either alone or working in partnership with other developers, will start to create some of the features below, which will offer both developers and end users access to some exciting new tools, such as:

  • Pay the producer
  • Smart contracts
  • Autonomous updates
  • Computation handling

We will provide you with more details on each release as it approaches and hopefully this post has been useful in providing more detail around our planned release cycle.

Theories on Incentives, Scaling & An Evolved Economy Pt 2

In the previous post of this series, I described ways in which human monetary systems have typically tended to scale and how this compares to the history of the Internet. Both systems function with models whose incentives to centralise at scale introduce vulnerabilities as dependence on the central points grows. Next, I want to extend this exploration to the exchange and ownership of property on the Internet and the scaling of intellectual property systems. Reviewing contrasts and similarities between physical property and intellectual property can help to shed light on the challenges we face in managing those systems as they grow – online and off. (Disclaimer: Some references in this post assume having read the content from the first part.)

Properties of Property: Personal, Private and Public

Having explored historical implementations of economies and their breakdown points at scale, we will find similarities in systems dealing with the ownership of property. While cultures around the world have different perspectives on property, there are three basic categories of ownership: personal, private and public.

Personal property consists of belongings actively used by an individual, such as a toothbrush, a homemade apple pie or a wrist watch. Public or common property consists of objects or land openly used or consumed by any member within a community, such as a common walking path or a common garden maintained by some or all of its members. Finally, private property is ownership usually in the form of real estate, but can be seen more broadly as relatively unused property owned by a person or group, as well as any property requiring significant external security (usually provided by state-based registration/titles). Examples of private property are banks, rental apartments and summer sailing boats. The lines between these categories can blur at times, but I will not address those cases for the purpose of simplicity.

Abundance within Meatspace Economies

The distinction between private and public ownership models and their respective abilities to scale are important aspects of economies to explore. For example, in a community where food and the labor to sustain its growth are abundant (perhaps out of love for gardening), the food itself is not very useful as a medium of exchange, since it is produced without distinct demand-side market forces. If one of these gardeners offers four apples to a blacksmith in exchange for a specific gardening tool, by normal market standards the trade doesn’t make sense, since the produce is in abundance already and will rot if not consumed. However, it would make sense for the blacksmith to view the gardener’s labor as an important value to the community and to support them by simply giving them the tool and continuing to eat the produce at their leisure. The two versions of this exchange may sound equivalent, but the incentives and transactions have quite different characteristics as the system grows.

Scaling Public Gardens

The preferred type of trade in the above scenario of abundance, also referred to as a gift economy, can be related to the concept of tracking Rai stone ownership (which we explored in the previous post) in that it can sustain itself as long as the scale stays small. Beyond that, a garden maintainer might realise that the relationships with those consuming the produce have become less beneficial, and thus can no longer depend on established community connections to guarantee gifts from others. At a larger scale, it makes sense for these producers to prefer working privately, because the abundance is reduced when growing for more individuals, while putting a price on the produce helps to guarantee their labor a similar or greater rate of return and makes it less likely to be taken advantage of. This price can come in the form of credits (by tracking the number of fruits and vegetables individuals receive) or as a relative price in a more fluid medium of exchange such as currency. However, even when a privately managed garden exhibits improvements in its ability to scale compared to a commonly managed one, it is not void of the further vulnerabilities brought on by economies of scale. Reducing the cost of inputs for a growing output is a natural tendency of business which, at a certain threshold, leads into a rather self-destructive cycle of incentives towards more centralised control.

Properties of Intangible Property

Now that we’ve established a perspective on the basic concepts of meatspace property, we can map them fairly accurately to intellectual property for a better understanding of the technology industry’s challenge to scale the labor that goes behind open source development and public content. Intellectual property (IP) can also be categorised as personal, private or public. Personal IP is any idea or data that we keep to ourselves so that only we (and perhaps a very few others) have access, such as health records and personal thoughts. Public IP, in contrast, is ideas and information shared freely and openly for anyone to access and use, like weather reports, certain online academic journals and content within the public domain. Finally, intellectual property considered private would be data controlled for restricted use, such as restricted software code or online publishing which requires payment to view its contents. To further sub-categorise private intellectual property, one can consider both works protected by preventing physical reproduction, such as DRM (Digital Rights Management), and those protected by legal security such as copyright and even free software licenses. While licenses like the GPL and MIT promote open standards, the fact that there is a restriction on use introduces aspects of private, owner-controlled IP. That is not to say this is a wrong method (I heavily stand by MaidSafe’s decision to release code as GPLv3), but in the context of these definitions and for the next part in this series, I think it’s important to keep this in mind.

Scaling Public Idea Gardens

So, with these distinctions, there’s obviously a vast amount of private IP out there of all shapes and sizes as our society reacts to the globalisation of ideas. Patent and copyright systems have been put to use for several hundred years, mostly aiming to reward the labor that goes into creations with an inherent lack of scarcity, like blueprints for inventions and writings, and generally to give an economic advantage to the creator. Unfortunately, this sort of solution essentially puts a price on access to a piece of intellectual property and privatises ownership, similar to the “once localised and looking to scale” community gardens, but even more drastically. The physical limit to be overcome for intellectual property to become abundant is comparatively minuscule, and shrinking further thanks to the improving storage capabilities of computers. So just as some members of a community may be inclined to tend a community garden, others, such as inventors and designers, might selfishly enjoy creating intellectual property, but as soon as their labor is easily taken for granted, the economic balance is disrupted. We should be able to support the production of intellectual property at a small scale, but the globalisation of communications and knowledge over the past several centuries has made that impractical.

Code and Content as Assets

Many activists involved in resisting DRM, copyright and patents often evangelise that private IP conflicts with long-term progress and inhibits essential freedoms. Finding economic answers that better serve the production of intellectual property, so that more people are incentivised to share ideas publicly rather than keep them privately protected, is certainly a difficult task and most likely will not be resolved via a single solution. Most mainstream research journals see the IP they publish as assets to protect in order to sustain their business, and thus make access artificially scarce by marking it with a price. Many VCs who fund software development similarly see the code produced by developers as a significant asset which protects against competing implementations. In some cases, there can be agreements between corporate competitors to build public solutions for standardisation purposes, but this is not a reliable solution and has the potential to lead to bad implementations of those standards that many others must then rely upon. To integrate intellectual property into our economy properly, we must work to evolve the economic system itself rather than force ideas to take on characteristics which make them artificially scarce.

In the final post of this series, I will overview solutions which have pushed the boundaries of how we deal with money and intellectual property and more specifically, what SAFE brings into the mix. Scaling is the main factor in the breakdown that we see in many systems from currency and property to the Internet itself so focusing our sights on this problem is essential. While MaidSafe is working on a single solution to address these issues via an evolved Internet, the previous experimentation and future supplementary projects from other parties will be necessary to grow a real foundation for a global economy and digital society.

Theories on Incentives, Scaling & An Evolved Economy Pt. 1

After meeting lots of great people and learning more about interesting free and open source software (FOSS) projects at LibrePlanet last year, I had the pleasure of attending and speaking at the conference again this year. The Free Software Foundation has been hosting these events since 2009, where foundation members (and non-members) gather to discuss technology, legal concepts and the general philosophies relating to free and open source software (and hardware). It was also my second opportunity to see Edward Snowden give a remote keynote, which in both cases clearly filled the room with excitement and renewed energy to defeat the insecurities and threats towards human rights brought on by mass surveillance and censorship.

Last year at the conference, I gave an overview of the SAFE Network and hosted a booth in anticipation of introducing people to the network, while this year I focused on the economics, incentives and distribution specifically regarding free software and digital content. I’ll expand on the thoughts presented there in a short series of blog posts here. In particular, economies of scale play a significant role in how our businesses and organisations have evolved to serve a global society. This centralisation of resources has enabled significant scaling properties, allowing input costs to be conserved while maintaining output growth. While this blog has featured many thoughts regarding the interesting economic aspects of building a new Internet, from the infrastructure-based incentives to the inherent difference between consumption of physical products and digital products, there is more to explore in those thoughts if we want to consider and implement the best solutions. Explaining these concepts to free software enthusiasts was an invigorating experience for me, and I hope that MaidSafe can continue to be a positive influence on this topic in this community. Sometimes it’s very difficult to see a future where money and value exchange systems work for the people instead of becoming monopolised and serving the special interests of those in power, but by taking a step back, we can begin to see how current incentives promote centralisation, and what new ways of understanding the nature of human economics and our perceptions of property and ownership can help us build tools for an evolved system.

Rai Stones & the Dynamics of Accountability and Distance

The Internet and monetary systems have very similar histories in terms of their tendency towards centralisation in their reach for globalisation. If you weren’t aware already, the insecure and controlling effects of the server-based Internet are the main factors behind the SAFE Network’s design and development. However, understanding the history of economies throughout various cultures and their diverse solutions to issues around the transfer of property, the ownership of services and general value management, in addition to their common incentive characteristics, is also a key factor in building this evolved Internet, particularly when it comes to scaling. In small communities and tribes where individuals have established relationships with everyone they interact with, it is relatively simple to sustain an economy, the trade of property and the maintenance of common services. On the island of Yap, for example, the inhabitants famously used Rai stones (massive, stationary representations of value used as a local currency), where members of the community simply kept track of their ownership when exchanges happened. This sort of system can work well at a small scale, where remembering ownership is not a problem, but as soon as the community expands past the ability of individuals to keep track of exchanges, the Rai system will inevitably fall apart or centralise, such that the responsibility becomes more efficiently handled by trusted third parties. Alternatively, the widespread adoption of gold and silver coins can be attributed to their ability to scale further. Instead of remembering ownership of immovable rocks, individuals can physically trade more granular representations of value, and that value is more easily kept private. However, you can imagine that this anonymity feature of gold brings along problems of counterfeiting, especially as the distance of trade grows and trust between the parties in an exchange is lost. So even with the increased scaling characteristics of physically traded tokens, community banks built around the ownership, security and purity of gold would eventually insert themselves as efficient third-party solutions.

The Downward Spiral in the Nature of Scaling

Now, this per-community centralisation is perhaps not the worst form to take, especially if members of the community are able to hold the third party accountable and prevent those running the services from becoming self-interested. With any service meant for community consumption, such as policing and public security, where individuals are granted special responsibilities involving trust, we see a breakdown in the system if it is not kept 100% accountable. This accountability itself requires active community participation and has its own stability threshold. If a community bank is depended on by too many people, the responsibilities of those managing the bank start to become more abstract as the relationships with their members are spread out and ultimately lost. This is not to say that a community bank managed by a group of well-intentioned and well-organised individuals cannot sustain itself at scale, but this is not the norm and we should not assume it can be the standard. Any altruistic, scalable bank must take on the additional responsibility of preventing malevolent actors from infiltrating, as those who crave power for power’s sake typically have more incentive to obtain those positions of control.

This problem is in fact quite similar to the centralisation of the Internet in many ways, and we can see this tendency towards third-party responsibility in solutions for security, identity, storage and many other services. The quick rate of change in the Internet’s evolution (mostly due to its historically open and free nature) allows ideas, experiments and development to iterate quickly and find better solutions. On the other hand, we are just now seeing some substantial research and testing towards better global economic solutions (mostly due to the economy’s historically closed and controlled nature), such as cryptocurrencies, which in large part is driven by the open and free nature of the Internet.

These parallels in how systems scale within human society are not to be overlooked. In fact, if you were to explore systems based on other biological organisms in nature, you might find similar thresholds and patterns in scaling. Nature is rarely ever successful at widespread scaling, but in some systems there are interesting properties of resilience and long-term efficiency which may also develop in human-based networks and technologies. As you may know, Argentine ants exhibit an interesting scaling behaviour that the SAFE Network is in part inspired by in our mission for a sustainable, resilient, decentralized Internet. However, there are further considerations related to properly scaling the Internet which I will explore in the next part of this series. The ownership of property and how it compares to intellectual property is a key element, especially as we dive deeper towards the relevance to free software and digital content.