SAFE Network: Mobile Tech Preview

As MaidSafe continues the roll-out of the network, we have hit another important milestone that we would like to share. We now have SAFE mobile applications running on Android and iOS, and today we have released some demonstration apps to showcase this progress. iOS requires some code updates and app certification before it is ready for user testing, and is currently limited to testing via the iOS simulator.

It is important to note that these apps should be considered a technology preview: a very useful proof point for us that the SAFE Network accommodates mobile devices. This is the culmination of several changes made over the past nine months, including a new data type and a new access mechanism in the form of the ‘Authenticator’. In time we will provide mobile developers with the tools and documentation they need to start developing SAFE mobile apps. In the meantime, please refer to today’s dev update for instructions and requirements for running these apps yourself.

The Authenticator
The first of these applications is the Authenticator. This is the focus of the imminent alpha 2 release and the mechanism by which users securely access the network while maintaining control over each SAFE application's access to their data.


SAFE Messages
The second application is a stripped-back, simple mail app. It provides end-to-end encrypted messaging that uses the public key of the recipient to encrypt the message, ensuring that only the recipient can read its contents.

Alpha 2
The mobile tech preview comes at an exciting time in MaidSafe’s development roadmap, a welcome lead-in to alpha 2, which we will be releasing next week, on Thursday the 21st of September. This latest alpha will incorporate the Authenticator, a new SAFE Network access mechanism that is network enforced and, as you can see from today’s announcement, mobile friendly. We look forward to providing more detail next week.

At MaidSafe, our development approach has been different to many other projects in the space. We have focussed on the hard problems first. This is not a criticism, just recognition of a different approach. Rather than putting out a network that gives little thought to the security of the data on it, or ignores the issue of how it will scale to millions of users, we have prioritised finding solutions to these big questions up front. This may create the appearance that we are moving slower than many of the other larger infrastructure projects in this sector, but in tackling the more challenging issues from the outset, in a methodical and transparent way, we anticipate being well placed to provide the decentralised infrastructure of the future.


Since the last blog update in May we have published new test networks that are helping us to evaluate much of our recent development work. If you recall, we made several changes to be able to accommodate mobile devices as network clients. These changes included the addition of the Authenticator (a secure access mechanism that is bundled with the SAFE browser) and a new network data type – mutable data – as well as a significant number of changes within the APIs.

Test 17
The current network, test 17, was introduced initially to a small number of forum users, but has since been scaled out to accommodate more users. Updated in mid-July (the 13th) and re-released based on initial feedback, test 17 has (barring a few minor bugs) behaved as anticipated, and we’re very encouraged by its stability. We intend to keep a test network in place from now on so that app developers can develop against it, rather than resorting to running apps locally.

Forum member Zoki has put together a couple of videos, posted on YouTube, that demonstrate the use of the Authenticator and the Web Hosting Manager, as well as viewing a few SAFE websites along the way. The Authenticator enables users to create their own network credentials without the involvement of third parties and provides access to the test network.

DNS, but not as we know it
The Web Hosting Manager lets users create their own public ID and service, to which they can upload content and publish it for other network users to view. This feature demonstrates a different approach to the Domain Name System (DNS) used on the existing Internet, which is managed by several DNS providers, such as Dyn and Verisign. Within the SAFE Network, this Decentralised Naming Service enables website owners to create their own domain without the involvement and cost of third parties, and enables instant publishing of data.
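As an illustrative sketch of how registry-free name resolution can work (hypothetical code, not the actual SAFE implementation), a lookup address can be derived deterministically from the public name itself, so publisher and visitor compute the same location without consulting any third party:

```python
import hashlib

def service_address(public_name: str, service: str = "www") -> str:
    """Derive a deterministic lookup address for a (service, name) pair.

    Illustrative only: on a real decentralised naming service this address
    would point at a piece of network-stored data owned by whoever first
    claimed the name.
    """
    # Hashing the name means anyone can compute the lookup address
    # locally -- no central registry or DNS provider is consulted.
    return hashlib.sha3_256(f"{service}.{public_name}".encode()).hexdigest()

# Publisher and visitor independently compute the same address.
assert service_address("maidsafe") == service_address("maidsafe")
assert service_address("maidsafe") != service_address("someone-else")
```

Because the address is a pure function of the name, "instant publishing" falls out naturally: there is no registrar to wait on, only a first-come claim on the named data.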

If you are a SAFE Network forum member at trust level 1 or higher, you will be able to participate in this test and play about with these demo apps for yourself; the following thread contains links to many of the websites published by other forum members.

SAFE email client
The second video produced by Zoki demonstrates the Email application, an end-to-end encrypted messaging app that uses the public key of the recipient to encrypt the message, ensuring that only the recipient can read its contents. Currently using nodes managed by MaidSafe in test 17, SAFE email will in future alphas be decentralised, ensuring that no central entity can view or control access to your communications.

It is important to note that these example applications are intended as tutorials which demonstrate the features of the network while guiding application developers to create more fully featured and polished apps with the SAFE Browser DOM APIs.

Data Chains
What we currently have in test 17 is unlikely to change much before we move to alpha 2. As mentioned above, we are very encouraged by the stability of this network. In tandem with much of the work above, the team has been working on a feature called Data Chains. You may remember from our previous blog post that this is a feature we anticipate will ultimately enable the secure republishing of data should the network ever lose power, as well as providing validation that data has been stored on the network. The team has considered multiple implementation options and, subject to simulation tests, has agreed on an approach and started the implementation. Testing of this new Routing design is likely to be incorporated within alpha 3. For plans beyond this, please refer to our roadmap.

Those who regularly visit our forums will notice an increasing number of new team members. Recruitment continues to receive significant focus as we scale the team to increase the speed and quality of the network roll-out while also spreading the load more evenly across the team. We have brought on board some operations staff at our HQ in Scotland and continue to grow the team overseas, who are currently based anywhere from Australia to Argentina!

We now have 23 people working with the company, but we are still looking for Network Engineers. If you are proficient in Rust, or have experience with C or C++ and with P2P architectures, please visit our careers page for more details on how to apply.

Well, that concludes this update. We really appreciate the continued support of everyone in the SAFE community (investors, testers, forum members). As you know, we are doing everything possible to expedite the network roll-out and give you the privacy, security and freedom you all so richly deserve.

Power of the Crowd Series: Number Four

Image: Desi Mendoza
It has been a while since we shared our initial thinking around the challenges facing the internet, and we have had some excellent reaction to the discussion so far. We very much appreciate the feedback, and it is pleasing to see that this is a timely discussion. Everyone from Sir Tim Berners-Lee to Wired and the Economist is debating the impact and consequences that technology, particularly the Internet, is having on society. Many different solutions have been put forward as the best answer to deep-rooted issues such as poverty, inequality and social mobility. We agree the current social and economic model, underpinned by the internet and other technologies, has not benefited everyone equally, but we are not convinced by the proposed solutions, such as universal basic income. Therefore, it is time to put forward our suggested response and open it up for further debate and improvement. As technologists we cannot solve all the complexities, but there are ways to use technology, especially an improved internet, to deliver a fairer, safer and more inclusive society.

So how can improvements to our internet infrastructure benefit everyone?
At MaidSafe we believe the solution is community-led, which is why we talk about the Power of the Crowd; but for the crowd to be successful, control has to shift from a handful of organisations to individual users, and we have to develop an open, incentive-based economic model that rewards participation in a community. Technology will continue to play a sophisticated role, but it should be the enabler, not the source of problems and inequality. Above all, those who develop the technology should not be allowed to retain an unhealthy level of control.

We believe this will go a long way to addressing the political/philosophical, rational and emotional debates outlined below.

The Political and Philosophical Challenge
No one has worked out how global societies should move forward in their relationship with technology. A lack of consensus means thinking is being informed by both rational and irrational ideas, and uncertainty is becoming the only, uncomfortable, constant. As technologists we are excited by this uncertainty, but as humans we have instinctive responses to fear and threats, which should not be overlooked. While some describe a future of flying cars, autonomous vehicles and neural lace that blurs the lines between robots and humans, others see no clear path forward for themselves and their families. These people are what Guy Standing describes as the Precariat – a new class that has evolved as a result of the rapid advances in technology. This community has no job security, is burdened with debt and lives in constant fear of social exclusion. They see robots and artificial intelligence as a threat. They look at the dominance of Google, Facebook and Amazon as unfair. Add to this the growing threat of cybercrime and the desire of governments to use mass surveillance in the name of national security, and it is easy to see why there is growing frustration. Inherent rights to self-determination, employment, privacy and security are being denied or stripped away.

The response of governments, policy makers and regulators is stuck in the 20th century at best. They believe mass surveillance powers are the only way to combat cybercrime and terrorism, yet there is no evidence this approach works. To address the rapid advance of technology, they set up innovation funds to foster economic opportunities for future generations and commission academic bodies to analyse the social impact. Yet they skirt nervously around the big ugly question of control and ownership, particularly that exerted by the internet technology vendors. Jonathan Taplin argues that breaking up Google would lead to the same type of innovation explosion that accompanied the break-up of AT&T. Resorting to regulators always makes markets uneasy, but you know there is a problem when even free-market advocates like the Economist suggest regulation is required!

The Economist has rightly identified that it is not technology that defines our current era, but data, and that ceding control of all our data to a few vendors is a bad idea. Furthermore, the current regulatory model is not fit for purpose, as it has failed to keep up with the pace of technological change. The answer is simple: we must switch control back to the user and give the individual the rights, education and skills to make informed decisions about how and when they engage with technology, and with those providing products or services via the internet.

The Rational Problem
Perhaps where governments can most effectively support this switch in control is by introducing regulation that changes the dynamics of the current internet-led economic model. The most radical answer would be a disbandment of existing intellectual property laws, which the likes of Guy Standing believe concentrate control in the hands of the few. Allowing a small number of companies to hold patents on crucial technologies enables them to defeat competition and maintain regular income flows. This is the key rational economic challenge to overcome. We have to ensure technology does not widen the disparity between the ‘haves’ and ‘have-nots’ but closes the gap.

At MaidSafe, we are sceptical that regulation alone can address the economic disparity question. One idea would be for an international governing body to oversee the internet and levy a tariff on internet companies, dividing the proceeds between countries to support the expansion of infrastructure and the improvement of technical skills. This is unrealistic: anyone observing the World Trade Organisation attempting to secure agreement on universal trade can see how hard countries find it to set aside national interests.

Another more radical approach is demanding greater adherence from internet companies to the principles of open source and the open web; in particular rebalancing what is considered intellectual property (IP), in order to improve accessibility. It is one of the main reasons why MaidSafe has made the underlying SAFE Network code available under the GPL license and transferred ownership of the underlying and defensive IP to the MaidSafe Foundation, a Scottish charity focussed on fostering education and innovation. Both Jonathan Taplin and Guy Standing talk about the internet companies being the landlords taking rent from those using their IP. We are not suggesting all protection for innovators be removed, but there is an argument that economically we have become over-reliant on patents and should reduce that dependency.

Encouraging the open sourcing of more critical infrastructure technologies creates the potential for a more even playing field as a starting point for those who want access to the internet. Of course, the big technology companies will say their business models fundamentally rely on revenue streams from existing products to fund the next generation of products, but they appear to have forgotten that a lot of today’s products and services started out as publicly funded research projects. If commercial companies are going to secure a long-term revenue stream from rentable models, then surely they must be encouraged to take a different approach to patents and IP.

More importantly, though, it would show willingness from industry to address the even bigger issue of inclusion: despite technologists heralding ever-growing numbers of people accessing the internet, there are still far too many cut off from its opportunities. Ultimately, this is one issue the policy makers and governments have to address, but adopting a more open source approach can go some way to enabling greater access.
Image: Slava Bowman

The Future is a Community-Led Movement
However, we believe the above options do not go far enough. Internet companies, particularly those obliged to report to Wall Street, will always struggle to balance commercial pressures against social good. That is why we have significant doubts about universal basic income (UBI), which the technology industry appears to be backing over-enthusiastically. On one level it appears arrogant, suggesting that ‘poor’ people should rely on a form of welfare system to make up for a lack of work. Perhaps we should all be grateful that the top 1% dole out handouts, but the vast majority of people we know would be offended if their family and future generations had to rely on UBI to get by. It also lacks innovative thinking: yes, technology will take away jobs, but we believe it will also create new ones and new economic models. Frankly, UBI is not radical enough, borne of traditional approaches to the welfare state.

Our proposal is that the network becomes a source of income and economic opportunity based on contribution and participation. Fundamentally, it becomes a reward system, where individuals and communities can contribute and feel a sense of accomplishment based on their level of participation. Above all, this should be a bottom-up approach, led today by communities of like-minded individuals. Network technologies and reward mechanisms are being developed to empower communities to take control of their identities and be more fairly rewarded. This will mean we are less reliant on the dominant internet companies and not waiting for government policy to catch up.

It still allows commercial companies to profit, but it also means users and content producers get to share the spoils. We should be offering users a reward in return for access to their data, and we should find innovative ways for users to monetise their computing resources. More and more households and communities will have sophisticated computing equipment that could provide revenue streams when individuals are not working. For example, at MaidSafe we are developing Safecoin, which provides a fair reward and payment mechanism for access to data. Combined with the ability of the SAFE Network to identify the owner of each chunk of data, it will be a better way for content producers (artists, bloggers and musicians alike) to receive payment, as well as paying users for access to their spare computing capacity.

The Emotional Challenge
We believe incentivising participation is crucial in addressing the final and most divisive challenge – the ambiguity that the rise of technology has created for many people. Understandably, it has led many to react instinctively and angrily to the control of the internet oligarchy. People are worried machines will lead to widespread redundancies and ultimately long-term unemployment, with no positive alternatives explained. The only way to address these concerns, which can become very emotive, is to create a community-led response. Working together, communities should be able to define opportunities, whether they are economic or social. The key is enablement and encouraging groups to work together, which again comes back to rewards and incentives. We already see a lot of this collaborative working in the SAFE Network Forum, which is moderated by members of the community, with MaidSafe only a contributor.

Using incentives and open source technology will make participation both accessible and beneficial. It will allow groups to work through challenges and create very local solutions. For example, imagine a community-led computing facility that generated income to support the group by offering capacity to the SAFE Network. That income could be shared among the group or used in exchange for products and services with other communities via the platform.

Clearly it is hard to envisage this reality while the SAFE Network is still in development, but the growth of the SAFE Network Forum emphasises the value of a community-led approach. There is a role for government in supporting these communities, making people aware of them and educating them on ways to participate. This is a central element of the inclusion issue. If governments and education institutions can provide the training and support to help citizens understand the opportunities this model offers, it will empower communities to find their own answers.

However, we should not wait for policy makers to catch up. We have left it to the politicians for too long to come up with the answers and they have failed. We will have far greater influence over our relationship with technology and how it affects our lives if we build a movement that mobilises around our needs. The vision is not one huge amorphous online community, but many different ones focused around common interests and needs, benefiting from open access, being rewarded for participation and being allowed far greater control of our personal data.

One final note to add: while this may seem like a huge and almost unmanageable challenge, it is no different from any other stage in history where the pace of technological change has forced a rethink of our approach to society and economics. Take this example:

“The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.”

This was written in 1890 by Samuel D. Warren and Louis D. Brandeis in the Harvard Law Review. Similar to today’s technology, advances in photography in the late 19th century were seen as hugely disruptive to society. We survived that inflection point. We got some things right and some things wrong. I’m sure with a willingness to take some brave decisions and a community-led approach we will get through this next stage in our relationship with technology.

SAFE Network Development Summary – May 2017

We’ve had quite a few requests on social media and by email these past few days for updates on development progress. These messages remind us that not everyone has the time or the inclination to read the weekly development updates we post each Thursday on the forum. So many projects, so little time! The intention with this post is therefore to provide a summary of the most recent events and our hopes and expectations moving forward.

Image: Richard Tilney Bassett

The best place to start is our development roadmap, which we updated and published late last week. This web page tries to encapsulate all the complexities of development over time on one page, so it’s pretty high level, but it is this snapshot view that most people seem to appreciate. You will notice that the roadmap outlines the major aspects of development and a rough indication of the order in which we anticipate tackling them.

You will also notice that we haven’t included timescales. In the past we have provided timescales for the ‘launch’ of the network. These have always been wrong, despite our best efforts. We have found it difficult to estimate timescales because, we believe, so much of what we have been working on is brand new technology, sometimes completely bespoke and other times building on the work of other projects. Testing is also interesting; it really helps us understand more about how the network fits together and how it is used by our community, but it invariably leads to more tweaking and testing, with previously unplanned and unknown rework and test durations.

We believe that publishing release dates with a high degree of uncertainty attached is not helpful to anyone and can cause more frustration than not publishing them at all. Network-related development is typically where the biggest black holes are, and as we get into incremental client-side development, we anticipate timescales will become more predictable.

Stable decentralised network
In late March we released test 15, a network that incorporated data centre resources as well as enabling user-run vaults. Within this release, users were also able to run the SAFE Browser, Launcher and demo app, which continue to facilitate the storage of private and public data, as well as the creation of public IDs and the publishing of SAFE websites.

After three days of running a stable network without any lost data, we realised we had reached an important milestone. While we had done this extensively in private tests, it was fantastic to see it running publicly and to see the community reaction. Of course, life has a sense of humour, and shortly afterwards it became apparent that a script had been written that created fake accounts and filled the relatively small network with data, stopping the creation of new accounts and the uploading of new data. This was really helpful to us, as it enabled us to find out what happens when the network reaches capacity in a real-world setting. The fact that it behaved as expected was reassuring, although we’d be lying if we didn’t admit to finding the spam attack a little frustrating. This is of course something that the integration of safecoin would stop, as the requirement to ‘pay’ to store data will make the attack expensive, while the incentive of safecoin for farmers would lead to a significantly bigger network.

What now?
Looking forward, we are currently focussed on three main areas:

  • Catering for mobile devices.
  • Enabling greater user participation.
  • Improving the resilience and robustness of the network.

The patience app developers have shown to this point will soon be rewarded. The process of converting our APIs from a REST paradigm to SDKs was essential to cater for mobile devices, as the requirement for REST APIs to maintain state would not have worked with mobile devices that disconnect and reconnect regularly. Users of the SAFE Network will gain access through the Authenticator, a secure gateway that protects user credentials from the application itself. The Authenticator is currently bundled with the SAFE browser and will enable users to securely authenticate themselves onto the network, or to browse publicly available data without logging in.

To implement the Authenticator, the team needed to add a new data type: mutable data. The new data type improves network efficiency, saves bandwidth, and provides the granular access control required by mobile platforms.
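As a rough mental model of this granular access control (hypothetical names, not the real SAFE API), mutable data can be pictured as a key-value container whose owner grants each application an explicit set of permitted actions:

```python
from dataclasses import dataclass, field

@dataclass
class MutableData:
    """Toy model of a mutable-data container with per-app permissions.

    Hypothetical structure for illustration only; the real network type
    also carries owner signatures, entry versions and a network address.
    """
    owner: str
    entries: dict = field(default_factory=dict)
    permissions: dict = field(default_factory=dict)  # app name -> set of actions

    def grant(self, app: str, actions: set) -> None:
        """The owner grants an application an explicit set of actions."""
        self.permissions[app] = actions

    def update(self, app: str, key: str, value: str) -> None:
        """An app may only mutate entries if it was granted 'update'."""
        if "update" not in self.permissions.get(app, set()):
            raise PermissionError(f"{app} may not update this data")
        self.entries[key] = value

# The owner authorises the mail app and nothing else.
md = MutableData(owner="alice")
md.grant("mail-app", {"update"})
md.update("mail-app", "inbox/1", "hello")   # allowed
# md.update("web-app", "inbox/1", "x")      # would raise PermissionError
```

The point of the design is that the permission check happens per application, not per user session, which is what lets the Authenticator mediate access on a user's behalf.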

With mobile devices so ubiquitous throughout the world, enabling mobile client access to the network via mutable data has been receiving significant focus. From a resource-provision perspective, both alpha and beta versions of the network will require laptops, desktops and, in time, single-board computers to earn safecoin when it is released. Eventually we will look at enabling mobile devices to farm for safecoin when plugged into a power outlet and in range of WiFi; however, as we detail below, this is not a priority for now.

More alphas
Some of the example applications that have been created are currently being ported to the new data type and the new APIs. The team are updating the documentation and testing the applications using a mock network, and they seem far more stable than previous iterations, which looks positive. We anticipate alpha 2 will encompass the new Mutable Data type and Authenticator, SAFE Browser DOM APIs and a Node.js SDK, along with example apps, tutorials and documentation.

Image: Clint Adair

Alpha 3 will see our focus shift to enabling a greater number of users to run Vaults from home by integrating uTP. Presently, users must set up TCP port forwarding or enable UPnP on their routers, which requires a little configuration in some cases. Adding uTP support will make for a more seamless process for many, while making the network accessible to more users. uTP is used in some BitTorrent implementations and, when implemented effectively, helps to mitigate poor latency and facilitate the reliable, ordered delivery of data packets.

During this phase we will also integrate node ageing, a feature that makes the network more resilient to consensus group attacks. The team will also implement the first part of data chains, a long-planned feature that we anticipate will ultimately enable the secure republishing of data should the network ever lose power, as well as providing validation that data has been stored on the network.

Looking ahead
Beyond alpha 3 we will focus on:

  • Data Chains, part 2
  • Data republish and network restarts
  • A security audit of the network
  • Test safecoin
  • Real-time network upgrades
  • Network-validated upgrades

As has been the case to this point, we will continue to release multiple test networks regularly between each alpha to prove the technology in a public setting and to guard against code regressions.

We continue to be grateful for the huge support of the people who take the time to run these networks and report back – you all know who you are!

Developer Case Study – Dsensor

Decentralized Mapping Protocol Project – Dsensor

Continuing our series of case studies highlighting early-stage application development on the SAFE (Secure Access For Everyone) Network, Dsensor is being developed by James Littlejohn. James explored various platforms to store and protect the data he would be collecting and decided to use the SAFE Network because it reflected his belief that the network should not be driven by economics, but be focused first and foremost on the data.

MaidSafe’s fully secure, decentralised approach supported James’ view that knowledge and data should be in the complete control of the user. While it is early days, Dsensor’s use of the SAFE Network APIs in its proof-of-concept form shows its potential as a viable platform for the management of data. James was also attracted to the SAFE Network because of its strong encryption and its ability to break data into chunks before scattering them around the decentralised network of nodes. Combined with the decentralised architecture, which avoids offering hackers central points of attack on a network, this ensures the highest possible security and privacy for users – in contrast to today’s centralised, server-based model.
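The chunking idea can be sketched in a few lines of Python – a simplified illustration of content-addressed chunking, not MaidSafe's actual self-encryption algorithm, which additionally encrypts each chunk using the hashes of its neighbouring chunks:

```python
import hashlib

def chunk_and_name(data: bytes, chunk_size: int = 1024) -> dict:
    """Split data into fixed-size chunks, naming each by its content hash.

    Simplified illustration: the real self-encryption scheme also encrypts
    each chunk (keyed on neighbouring chunk hashes) before the chunks are
    scattered across the network's nodes.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Content addressing: a chunk's name is derived from its contents, so
    # identical chunks deduplicate and any tampering changes the name.
    return {hashlib.sha3_256(c).hexdigest(): c for c in chunks}

stored = chunk_and_name(b"A" * 1024 + b"B" * 1024 + b"A" * 1024)
assert len(stored) == 2  # the repeated 'A' chunk deduplicates to one copy
```

No single node holds the whole file, and nothing about an individual chunk's name or contents reveals what the original data was, which is what removes the central point of attack.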

Being open source and supported by a strong community in the SAFE Network forum also means James has ready access to experts and potential partners, who can help to build out the application and trouble-shoot any technical questions. In the future James may also explore using safecoin to incentivise participation on Dsensor.

The Problem with Science

James Littlejohn has been involved in entrepreneurial projects since the dot-com boom and, while investigating opportunities around text mining, identified an opportunity for lifestyle-linking analytics, particularly in the area of wearable tech. In the course of his evaluation he recognised a broader application to data mining and analysis in the field of scientific and academic research. James assessed a number of drivers, including emerging technologies and changing economic conditions, which were beginning to have an effect on the way research was conducted.

Firstly, walled-garden applications such as Facebook and wearable technologies were becoming more prevalent, and while they were a rich source of data on human activity, access to that information was restricted. At a time when the internet is supposed to be democratising many aspects of work and social life, this is endangering an important source of information on lifestyle and health patterns which could benefit societies around the world.

Secondly, the sustained economic impact of the financial crisis was creating significant pressure on public funding for research at a time when it was needed more than ever. Technology and the availability of large amounts of data are leading to opportunities for breakthroughs in a wide variety of academic and research fields. If the funding is not available via traditional public sources, then there is an urgent need to find new forms of investment. The rise of alternative cryptocurrencies could potentially address this point, offering a new, fairer way to incentivise and reward individuals for participating in research projects. For example, James envisages a scenario where a grant funder might ‘tokenise’ a percentage of their funding money and issue it via a science blockchain (like Dsensor). This would help to ensure the funding could be traced directly, ensuring good governance of scientific research projects and fairer access to resources.

The final driver for a new model reflects an ongoing debate about the peer-reviewed model of scientific research. For a number of years there has been recognition of some fundamental weaknesses in the model, such as the replicability of research. In a poll conducted by Nature in May 2016, more than 70% of researchers admitted they had tried and failed to reproduce the experiments of other scientists, and more than 50% had failed to reproduce their own experiments. This is partly due to the nature of frontier scientific research, which relies on trial and error, but there are clearly inefficiencies in the process.

Furthermore, there are questions about the efficiency of current research models: in 2009, Chalmers and Glasziou identified some key sources of avoidable waste in biomedical research, estimating that the cumulative effect was that about 85% of research investment (roughly $200 billion of the investment made in 2010) is wasted. A blockchain provides a potential solution to this reproducibility crisis, as Dr. Sönke Bartling and Benedikt Fecher outline in their paper, “Blockchain for science and knowledge creation.” Although scientific research should be delivered at arm’s length from individual contributors, it ultimately relies on individual scientists to gather and interpret data without bias. It is also often reliant on finite data sets, controlled samples or clinical trials, limiting the ability to cross-reference findings against other data sources.

Given the availability of data via the internet and the rise of automation technologies such as machine learning, James believes that if individuals have control of their information they can choose to contribute it to research projects without the interference of third parties such as academics or technology providers. Using automation, scientists, academics and, most importantly, citizen scientists can draw data from anywhere in the world, beyond the confines of a specific controlled sample, and review it independently to produce a data-driven outcome.

Building A Blockchain for Science Research – A Truth Engine for Mankind

James’ investigation of text-mining approaches led him to peer-to-peer models, which were enabling the owners of data to take control of how, and with whom, their information was shared.

This led to the development of Dsensor (the Decentralized Mapping Protocol), a peer-to-peer network through which scientific knowledge can be investigated, discovered and shared. It is based on the principle of scientific “SenseMaking” and is designed to evolve peer review into a computational consensus model. With Dsensor, a scientist who wants to test a thesis enters the hypothesis in computational form (called a Dmap in Dsensor speak). The Mapping protocol then automates the testing of the science, starting by trawling the Dsensor network for relevant data from other peers. That data is then sampled and ‘scored’ on its predictive power to verify or challenge the thesis, until a computational consensus is established. Science attaining this status becomes ‘computationally active’ in the network, meaning any peer can tap into the collective knowledge, feed in their own unique sensor data and get the insights from the science working for them.
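The shape of that scoring loop can be caricatured in a few lines. Dsensor’s actual protocol is not specified here, so the hypothesis function, the sampling strategy and the consensus threshold below are all hypothetical; the sketch only illustrates the idea of scoring randomly sampled peer data by predictive power until a consensus emerges:

```python
import random

def computational_consensus(hypothesis, peer_datasets, threshold=0.8,
                            samples=5, seed=0):
    """Score a hypothesis against randomly sampled peer data.

    `hypothesis` maps an input to a predicted output; each peer dataset is
    a list of (input, observed_output) pairs. A sample's score is the
    fraction of observations the hypothesis predicts correctly; consensus
    is declared when the mean score over sampled peers reaches `threshold`.
    (Hypothetical sketch; not Dsensor's real parameters or scoring rule.)
    """
    rng = random.Random(seed)  # deterministic sampling for the example
    sampled = rng.sample(peer_datasets, min(samples, len(peer_datasets)))
    scores = []
    for data in sampled:
        correct = sum(1 for x, y in data if hypothesis(x) == y)
        scores.append(correct / len(data))
    mean_score = sum(scores) / len(scores)
    return mean_score >= threshold, mean_score

# A trivial 'Dmap': the hypothesis that doubling describes the peers' data.
peers = [[(1, 2), (2, 4), (3, 6)] for _ in range(10)]
ok, score = computational_consensus(lambda x: 2 * x, peers)
assert ok and score == 1.0
```

Random sampling is what makes the scheme hard to game: a colluding peer cannot know in advance whether its fabricated data will be in the sample that decides consensus.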

James has the ambitious goal of making Dsensor a “truth engine for mankind”, ensuring science is based on openness, transparency and reproducible results, essentially checking the accuracy of peer review. Dsensor intends to deliver this by building a network of trustless peers whose inherent complexity makes it economically and technically very costly and difficult to corrupt. Dsensor, currently at the proof-of-concept stage, utilises the Ethereum blockchain, using its cryptography and a variant of the proof-of-work process to create a technological and mathematical state in which, even with colluding computers, it is impossible to game the system. Rather than creating complexity using proof of work, Dsensor creates uncertainty using random sampling, in particular in the selection of peers from the network and in its data sampling techniques. The data sampling techniques are selected by each autonomous peer, while network sampling is a property of the protocol for selecting peers from a Distributed Hash Table. In theory, once the network reaches a certain size, the economic cost of gaming it with false sensor data to support a fraudulent scientific computation becomes prohibitive.

Additional safeguards include the role of reproducibility in science. Reproduced results create an immutable audit trail of “mapping accounting” entries that support the most truthful science available to the network. These records, called GaiaBlocks, are open to challenge by any peer on the network. Scoring the science also provides a rating model for scientific research and for individual peers on the network; peers with poor outcomes will be demoted in favour of more highly rated scientific computations.


Developer Case Study: Project Decorum

During the course of 2016, MaidSafe have been privy to a number of projects building on top of the SAFE Network. One such project is Project Decorum.

What is it?

Project Decorum is currently a research-led project run by Harmen Klink, a computer science undergraduate at HU University of Applied Sciences Utrecht in the Netherlands. He wants to build a social media platform that gives users greater control of their data, and therefore enhanced privacy, rather than following today’s model, which is centralised around a few service providers.

Project Decorum is currently a proof of concept, which Harmen designed to support a crowdsale that raised over €400,000. He aims to use this investment to develop the application further, aspiring to create a hybrid of the best features of existing major applications such as Facebook, Reddit and Twitter.

How does it work?

Because the SAFE Network is designed on the principle of a “serverless” architecture, the core protocol of Project Decorum acts as a substitute for the missing central coordinator. It consists of a set of rules that describe where and how conversational data should be uploaded to the SAFE Network. These rules predict where the replies to a particular message might end up, no matter where the original is located. This means that all applications and SAFE websites that use the protocol will be compatible with each other, making communication simpler.
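The idea of predictable reply locations can be illustrated with a simple hash-based scheme. Decorum’s actual derivation rules are not described in the source, so the SHA-256 construction below (parent address plus a fixed `/replies` suffix) is purely a hypothetical sketch of the principle:

```python
import hashlib

def reply_location(parent_address: str) -> str:
    """Derive a deterministic network address for the replies to a message.

    Because every client runs the same derivation, any application that
    knows a message's address can compute where its replies live without
    asking a central coordinator. (Hypothetical scheme for illustration;
    not Decorum's actual protocol.)
    """
    return hashlib.sha256(f"{parent_address}/replies".encode()).hexdigest()

# Two independent clients derive the same reply location:
loc_a = reply_location("safe://forum/post-42")
loc_b = reply_location("safe://forum/post-42")
assert loc_a == loc_b
```

This is the sense in which the rules “predict” reply locations: the location is a pure function of the original message’s address, so compatibility between apps falls out of shared arithmetic rather than a shared server.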

On the data level all information is visible, and the protocol organises conversations into a tree structure in which every node represents a message from a user. Replies to earlier messages create new branches. This tree structure lends itself well to the “threaded” format used by many well-known forums and comment plugins. A user interface determines what data users see, and anyone can create a new root to start a tree for a new conversation. This can be used to create a forum, a comment section on a blog, a group chatbox, and so on.
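A conversation tree of this kind can be sketched in a few lines. The `Message` structure and the indentation-based renderer below are illustrative only, not Decorum’s actual data format:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    author: str
    text: str
    replies: list = field(default_factory=list)

def render_threaded(msg: Message, depth: int = 0) -> list:
    """Flatten a reply tree into the indented 'threaded' view used by forums."""
    lines = [("  " * depth) + f"{msg.author}: {msg.text}"]
    for reply in msg.replies:
        lines.extend(render_threaded(reply, depth + 1))
    return lines

root = Message("alice", "A new root starts a conversation")
root.replies.append(Message("bob", "A reply creates a new branch"))
root.replies[0].replies.append(Message("carol", "Replies can nest arbitrarily"))
print("\n".join(render_threaded(root)))
```

The same tree could equally be rendered as a flat chatbox or a blog comment section; the storage structure is independent of the presentation.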

In Project Decorum users will own their data, and everyone is their own moderator through the use of personal ignore lists. Particular posts or users can be put on such an ignore list, and it is also possible to subscribe to one or more ignore lists run by other people. This allows dedicated and widely accepted moderators to emerge naturally in their respective communities: active people with sound judgement will be subscribed to as moderators by groups. These people can also collaborate to form a moderator team, and possibly accept donations or even charge for their moderation services. Multiple teams with different rules can be active in the same community if there is demand.
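Client-side moderation of this kind amounts to filtering the visible message set against the union of the user’s own ignore list and any subscribed lists. A minimal sketch, with all names hypothetical:

```python
def visible_messages(messages, own_ignores, subscribed_lists):
    """Filter messages against the union of personal and subscribed ignore lists.

    `messages` is a list of (author, text) pairs; each ignore list is a set
    of author names. Moderation is purely client-side: nothing is deleted
    from the network, content is only hidden locally for this user.
    """
    blocked = set(own_ignores)
    for shared in subscribed_lists:  # e.g. a trusted moderator's list
        blocked |= set(shared)
    return [(author, text) for author, text in messages if author not in blocked]

feed = [("alice", "hello"), ("spambot", "buy now"), ("bob", "hi")]
print(visible_messages(feed, {"troll"}, [{"spambot"}]))
# → [('alice', 'hello'), ('bob', 'hi')]
```

Because each user composes their own set of lists, two users in the same community can see different moderated views without any central moderator acting on the underlying data.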

Why is Project Decorum working with MaidSafe?

Harmen chose the SAFE Network for his project for several reasons. He believes the privacy and security of the platform should be a pre-requisite for any Internet application. Furthermore, the decentralised model offers great scalability, and he has found it hard to overload the system. Safecoin is also a great feature because of the way it is integrated into the network and offers instant rewards. This will help to sustain engagement with the platform, as social payments are a feature increasingly expected by users. It also gives developers the flexibility to tokenise other assets, creating crypto-currencies to represent all kinds of assets.

What’s next for Project Decorum?

The next steps for Project Decorum include working on designs to make them more tangible and figuring out the business model. As APIs for the SAFE Network become available and more stable, Harmen will continue development of the protocol. MaidSafe hope that features such as the automatic reward mechanism for participants will enable Harmen to further develop the usage model for Project Decorum.

Harmen Klink, Founder, Project Decorum

“I believe having access to multiple identities is an important benefit of the SAFE Network, because it reflects the varied identities and roles we play in our personal and work lives. The network of identities forms a web of trust that can be used to distinguish legitimate users from abusive bots. When a real name is coupled to an identity, the strength of the web of trust is also used to show others the likelihood that those two truly belong together. This protects users from becoming victims of impersonation and identity theft.”

The SAFE Network Release Cycle

As you may have gathered from the even greater amount of activity on GitHub (I didn’t think it was possible either), the core of the SAFE Network is being tested both internally and externally as we get ever closer to a stable decentralised network. While details of the team’s development progress continue to flow via the forum, the purpose of this post is to establish the main phases of the impending, iterative release process. These are:

  • Alpha – the core network and fundamental functionality are provided via an initial implementation; this is the first public testing phase.
  • Beta – the alpha implementations are iterated on and improved, based on user feedback.
  • Release candidate – the fundamental feature set is stabilised to provide greater security, resilience and efficiency.
  • Release – privacy, security, freedom!

The speed at which MaidSafe will iterate through the alpha testing phase is unknown and will be dependent upon how well the network performs at this stage. However, it is anticipated that having the core network in place will make it significantly easier to test the addition of new features than ever before. Testing against mock networks is only useful up to a point!

There will be several alpha releases, which will commence in simple numerical order, each denoting an incremental improvement on the previous version. For example, as per the roadmap, alpha 1 will come with: public ID management, public and private data storage, vault configuration and desktop installers (64 bit Windows, Mac and Linux). The second alpha iteration will include additional features and will be called alpha 2, and so on.

SAFE Network Fundamentals

The fundamental features, beyond the alpha 1 release, have been defined as:

  • Contact management
  • Messaging
  • Test safecoin
  • Vaults running on ARM architectures
  • Launcher UI
  • Safecoin wallet

The alpha releases will gradually implement this functionality in an iterative cycle, providing the features highlighted above. However, these will be first iterations, and development will continue until the engineering team are satisfied that the implementation provides the desired functionality. At that point the network will transition to beta, where these features will become more elegant, efficient and secure. The release candidate will see the features frozen and further stabilised prior to full release, at which point safecoin will go live.

In tandem with this release cycle, both users and developers can expect the ongoing release of APIs that reflect access to ever increasing network functionality, as well as example applications that showcase features of the network to end users and also act as tutorials to developers.

Out of Beta and Moving Forward

Beyond release, MaidSafe, either alone or in partnership with other developers, will start to create some of the features below, offering both developers and end users access to some exciting new tools, such as:

  • Pay the producer
  • Smart contracts
  • Autonomous updates
  • Computation handling

We will provide you with more details on each release as it approaches and hopefully this post has been useful in providing more detail around our planned release cycle.