SAFE network

Autonomous Data Networks and Why The World Needs Them

At MaidSafe we talk about the SAFE Network being ‘autonomous’, but what does that really mean? The phrase is something that we are becoming more familiar with, as we hear talk of autonomous vehicles and autonomous robots; as such we probably have a grasp of the underlying concept that autonomous machines do things for themselves. But how does this relate to data and why should we even care?

In simple terms, an autonomous data network is one that manages all our data and communications without any human intervention and without intermediaries. Humans take on a new role: we become the definers of the rules and protocols that instruct the network on how to manage our data.

The SAFE Network

In practical terms, an autonomous data network is one that configures itself. All data on the network is automatically split into chunks and encrypted (utilising self-encryption) before being stored at random locations selected by the network. Resources are not added to it by an IT administrator; instead, nodes join the network anonymously and are assigned at random, without any central authority, to small groups we call close groups. Each node performs a number of different and clearly defined tasks, and the membership of these close groups changes as nodes disconnect from and reconnect to the network. The groups work together, making decisions on behalf of the network (such as where to store data and who has authority to access it) based on the messages they receive. The more technically minded can read about this in depth here.
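
To make the chunking idea concrete, here is a deliberately tiny, std-only Rust sketch of self-encryption. It is an illustration of the principle only: the real self-encryption library uses proper cryptographic hashes and a real cipher, and the chunk size, key derivation and names below are our simplifications.

```rust
// Toy self-encryption sketch (illustration only; the real library uses
// cryptographic hashes and a real cipher, not these stand-ins).
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

const CHUNK_SIZE: usize = 4; // tiny for the demo; real chunks are far larger

// Stand-in for the network's cryptographic content hash.
fn hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    h.write(data);
    h.finish()
}

// XOR stands in for a real cipher keyed from neighbouring chunk hashes.
fn encrypt(chunk: &[u8], key: u64) -> Vec<u8> {
    chunk
        .iter()
        .zip(key.to_le_bytes().iter().cycle())
        .map(|(b, k)| b ^ k)
        .collect()
}

fn main() {
    let file = b"some file contents to store on the network";
    let chunks: Vec<&[u8]> = file.chunks(CHUNK_SIZE).collect();
    let hashes: Vec<u64> = chunks.iter().map(|c| hash(c)).collect();
    let n = hashes.len();

    for (i, chunk) in chunks.iter().enumerate() {
        // Each chunk's key derives from the hashes of the two preceding
        // chunks (wrapping), so no chunk can be decrypted in isolation.
        let key = hashes[(i + n - 1) % n] ^ hashes[(i + n - 2) % n];
        let encrypted = encrypt(chunk, key);
        // The chunk is stored at an address derived from its *encrypted*
        // content; the network never sees plaintext or who owns it.
        println!("chunk {i} -> network address {:016x}", hash(&encrypted));
    }
}
```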

The network also optimises itself by creating more copies of popular data, increasing its availability so that data requests are served more quickly. This feature enables SAFE websites to actually speed up as they gain more visitors, very much contrary to the status quo, where we have become accustomed to websites slowing down, or even crashing in severe cases, under the weight of user requests. Should the network split for any reason, for example through loss of power, it will merge again as power is restored, and it will correct faults, such as detecting corrupt data chunks and automatically replacing them with good copies as a result of the network's ongoing data integrity checks.
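
As a rough sketch of the two maintenance behaviours just described, integrity checking and popularity-driven replication, something like the following pass could run in each storage node. The types, the baseline of eight replicas and the scaling rule are our illustrative assumptions, not MaidSafe's vault code.

```rust
// Illustrative maintenance pass for a storage node: integrity checking plus
// popularity-driven replication. All names and numbers are assumptions.
use std::collections::HashMap;

struct StoredChunk {
    address: u64,       // content hash the chunk must still match
    data: Vec<u8>,
    request_count: u64, // how often clients have fetched this chunk
}

// Stand-in for the network's cryptographic content hash.
fn content_hash(data: &[u8]) -> u64 {
    data.iter()
        .fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(*b as u64))
}

fn maintain(store: &HashMap<u64, StoredChunk>, replicas: &mut HashMap<u64, usize>) {
    for (addr, chunk) in store.iter() {
        // Integrity check: a corrupt chunk no longer matches its address,
        // so a verified good copy is requested from another group member.
        if content_hash(&chunk.data) != chunk.address {
            println!("chunk {addr:x} corrupt; re-fetching a good copy");
        }
        // Popularity check: heavily requested chunks gain extra copies so
        // requests are served faster (the 'speeds up with visitors' effect).
        let wanted = 8 + (chunk.request_count / 100) as usize; // assumed rule
        let have = replicas.entry(*addr).or_insert(8); // assumed baseline
        if *have < wanted {
            println!("chunk {addr:x}: raising replicas {have} -> {wanted}");
            *have = wanted;
        }
    }
}

fn main() {
    let data = b"popular chunk".to_vec();
    let addr = content_hash(&data);
    let mut store = HashMap::new();
    store.insert(addr, StoredChunk { address: addr, data, request_count: 1_000 });
    let mut replicas = HashMap::new();
    maintain(&store, &mut replicas);
}
```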

Remove the middlemen

This design sounds complex, and at the implementation level it is (the dark bags under our engineers' eyes are testament to that fact), but at a high level it is simple. The approach was inspired by the humble ant, whose millions of years of evolution influenced the network's design. Ant colonies exhibit complex and highly organised behaviour without a central authority, based on a simple rule set whereby each ant fulfils different duties according to the needs of the colony. Nodes (computers) on the SAFE Network function in a similar manner, performing different functions based on the types of messages they receive.
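
A minimal sketch of that message-driven, ant-like dispatch might look like the following; the message names are invented for illustration and are not the actual routing types.

```rust
// Ant-like dispatch: a node has no fixed global role and simply reacts to
// whichever message arrives. Message names here are invented.
enum Message {
    StoreChunk { address: u64, data: Vec<u8> },
    GetChunk { address: u64 },
    VoteOnMembership { candidate: u64 },
}

fn handle(msg: Message) {
    match msg {
        Message::StoreChunk { address, data } => {
            println!("acting as holder: storing {} bytes at {address:x}", data.len());
        }
        Message::GetChunk { address } => {
            println!("acting as manager: serving chunk {address:x}");
        }
        Message::VoteOnMembership { candidate } => {
            println!("acting as group member: voting on node {candidate:x}");
        }
    }
}

fn main() {
    handle(Message::GetChunk { address: 0xdeadbeef });
    handle(Message::VoteOnMembership { candidate: 0x42 });
}
```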

The ant colony shows us that this self-managing and self-organising behaviour is possible on a massive scale. But why should we try to emulate ants and remove central authorities from the management of our data? Surely, for something as important as this, humans are required to oversee operations?

Well, for a start, humans are, well, human. At our best we are creative, brilliant and passionate, but at our worst we get tired, emotional, and we make mistakes. Many data breaches are caused by human error, and attackers rely on human interaction to carry out attacks. Researchers at security company Rapid7 found a substantial decline in security alerts on weekends and public holidays, which they attribute to fewer employees interacting with malicious emails, attachments, links and websites. This is partly a result of a lack of training and awareness: only 20% of companies provide cyber security training to their staff, and only 33% have formal policies in place to guide employees.

Human error has also played a significant part in problems with Silicon Valley’s best known companies. In 2011, developers at cloud storage provider Dropbox introduced a bug that left their 25 million client accounts unprotected for 4 hours. Dropbox were subsequently alerted to the problem by an external security researcher and fixed the authentication issue.

Late last year, Twitter deleted the account of its CEO Jack Dorsey, who lost 700,000 followers in the process, citing an 'internal mistake'. Around the same time, Facebook mistakenly deleted posts addressing fake news by its CEO Mark Zuckerberg.

While the irony of these incidents can be amusing, they expose a more serious issue. Not only are humans prone to mistakes; these incidents also highlight that access to our accounts and our data is granted to us by the service providers. We do not really own our information in any true sense of the word. Access to our own data can be removed at any time by the providers, either mistakenly or at the request of others.

Physical Security

Physical security plays a hugely important part in all of this, and it is one of the major features that an autonomous data network provides. In data terms, physical security means that data cannot be deleted, changed, corrupted, or accessed without your (the data owner's) consent. It can only be provided by removing humans from the management of our data, and it is only possible when the storage locations are unknown to anyone but the network and the user cannot be identified.

Any service where data is stored on servers, federated servers, owned storage locations, or on identifiable nodes, cannot ensure the security of data and brings us no closer to real unfettered ownership of our data. This also includes blockchain based solutions.

The SAFE Network provides physical security by ensuring that only the network knows where the data is and only the user can access it. Even MaidSafe staff don't know who is on the network, where they are based, what has been stored or where the data is located. SAFE users make a deal with the network: only the data owner can delete or modify the original piece of data, with the network verifying who has the right to access each piece.

Autonomous things are already starting to deliver huge benefits across a number of industries, and we are just scratching the surface in finding out how they can positively impact our relationship with our data. Rather than making data more secure, the human element unfortunately has the opposite effect and can lead to data loss, theft, inaccessibility and a fundamental lack of ownership.

SAFE Network Alpha 2 (The Authenticator) Launch Announcement

Today we are excited to be releasing the next major milestone in the roll-out of the SAFE Network: Alpha 2 – The Authenticator.

This latest step is a culmination of a significant amount of hard work from the MaidSafe team, and much testing from the superb SAFE Network community. The result is a new network access control mechanism, the Authenticator, which enables users to securely authenticate themselves onto the SAFE Network, while protecting their network login credentials from apps.

Bundled with the SAFE Browser, the Authenticator supports Windows, OSX and Linux and, as many will have seen from last week's update, now also supports Android, with iOS support in progress. As we mentioned last week, the intention with the mobile example apps is to confirm mobile platform support; we will provide the docs, tutorials and APIs that mobile app developers would expect in due course.

The desktop version of the SAFE Browser will come with two tutorial applications: the Web Hosting Manager and SAFE Mail. These apps will be familiar to those who have taken part in recent test networks. The Web Hosting Manager allows users to create their own public ID, and to upload and publish content instantly. SAFE Mail provides end-to-end encrypted email, using the public key of the recipient to encrypt the message.
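
The encrypt-to-the-recipient flow behind SAFE Mail can be pictured with a toy key exchange. Real SAFE mail uses proper public-key authenticated encryption; the u64 Diffie-Hellman and XOR 'cipher' below are std-only stand-ins purely to show the flow, and all parameters are illustrative.

```rust
// Toy sketch of encrypt-to-recipient mail; not the real cryptography.
const P: u64 = 18_446_744_073_709_551_557; // largest prime below 2^64
const G: u64 = 5;

// Modular exponentiation, widened to u128 to avoid overflow.
fn mod_pow(base: u64, mut exp: u64, modulus: u64) -> u64 {
    let m = modulus as u128;
    let mut b = base as u128 % m;
    let mut acc: u128 = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * b % m;
        }
        b = b * b % m;
        exp >>= 1;
    }
    acc as u64
}

fn main() {
    // Each user holds a keypair; public keys are published on the network.
    let alice_secret = 0x1234_5678_u64;
    let bob_secret = 0x9abc_def0_u64;
    let alice_public = mod_pow(G, alice_secret, P);
    let bob_public = mod_pow(G, bob_secret, P);

    // The sender needs only the recipient's *public* key; only the
    // recipient's secret can re-derive the same shared key.
    let key_at_alice = mod_pow(bob_public, alice_secret, P);
    let key_at_bob = mod_pow(alice_public, bob_secret, P);
    assert_eq!(key_at_alice, key_at_bob);

    // XOR stands in for a real symmetric cipher keyed with the shared key.
    let ciphertext: Vec<u8> = b"hello bob"
        .iter()
        .zip(key_at_alice.to_le_bytes().iter().cycle())
        .map(|(m, k)| m ^ k)
        .collect();
    println!("ciphertext: {ciphertext:x?}");
}
```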

The two Android applications provided with this release are the Authenticator and SAFE Messages, an example application that demonstrates end-to-end encrypted mobile email.

As has been the case with more recent test networks, you will need an invite code to participate in this alpha. To obtain one, you will need a basic user account on the SAFE Network forum; for those new to the SAFE Network and the forum, the following link explains how to get this. This measure is in place to prevent the network from being flooded with data before it is fully featured.

As you may have noticed, the installers for today's release are on our new web page (we hope you like it), and tonight's forum post contains a full breakdown of information relating to this release. We hope you enjoy using Alpha 2 as much as we enjoyed creating it!

SAFE Network: Mobile Tech Preview

As MaidSafe continues the roll-out of the network, we have hit another important milestone that we would like to share. We now have SAFE mobile applications running on Android and iOS, and today we have released some demonstration apps to showcase this progress. iOS requires some code updates and app certification before it is ready for user testing, and is currently limited to testing via the iOS simulator.

It is important to note that these apps should be considered a technology preview: a very useful proof point for us that the SAFE Network accommodates mobile devices. This is the culmination of several changes made over the past nine months, including a new data type and a new access mechanism in the form of the 'Authenticator'. In time we will provide mobile developers with the tools and documentation they will need to start developing SAFE mobile apps. In the meantime, please refer to today's dev update for instructions and requirements for running these apps yourself.

The Authenticator
The first of these applications is the Authenticator. This is the focus of the imminent alpha 2 release and the mechanism by which users securely access the network, while maintaining control of each SAFE application's access to their data.

""

SAFE Messages
The second application is a stripped-back and simple mail app. It provides end-to-end encrypted messaging that uses the public key of the recipient to encrypt the message, ensuring that only the recipient can read its contents.

Alpha 2
The mobile tech preview comes at an exciting time in MaidSafe's development roadmap, a welcome lead into alpha 2, which we will be releasing next week, on Thursday the 21st of September. This latest alpha will incorporate the Authenticator, a new SAFE Network access mechanism that is network-enforced and, as you can see from today's announcement, mobile-friendly. We look forward to providing more detail next week.

At MaidSafe, our development approach has been different from that of many other projects in the space. We have focussed on the hard problems first. This is not a criticism of others, just recognition of a different approach. Rather than putting out a network that gives little thought to the security of the data on it, or ignores the question of how it will scale to millions of users, we have prioritised finding solutions to these big questions up front. This may create the appearance that we are moving more slowly than many of the other large infrastructure projects in this sector, but by tackling the more challenging issues from the outset, in a methodical and transparent way, we anticipate being well placed to provide the decentralised infrastructure of the future.

SAFE Network Development Summary – August 2017

Since the last blog update in May we have published new test networks that are helping us to evaluate much of our recent development work. If you recall, we made several changes to be able to accommodate mobile devices as network clients. These changes included the addition of the Authenticator (a secure access mechanism that is bundled with the SAFE browser) and a new network data type – mutable data – as well as a significant number of changes within the APIs.

Test 17
The current network, test 17, was introduced initially to a small number of forum users, but has since been scaled out to accommodate more users. Updated in mid-July (the 13th) and re-released based on initial feedback, test 17 has (barring a few minor bugs) behaved as anticipated, and we're very encouraged by its stability. We intend to keep a test network in place from now on to enable app developers to develop against it, rather than resorting to running apps locally.

Forum member Zoki has put together a couple of videos which he has posted on YouTube that demonstrate the use of the Authenticator and the Web Hosting Manager, as well as viewing a few SAFE websites along the way. The Authenticator enables users to create their own network credentials without the involvement of third parties and provides access to the test network.

DNS, but not as we know it
The Web Hosting Manager enables users to create their own public ID and service, to which they can then upload content and publish it for other network users to view. This feature demonstrates a different approach from the Domain Name System (DNS) used on the existing Internet, which is managed by several DNS providers, such as Dyn and Verisign. Within the SAFE Network, this Decentralised Naming Service enables website owners to create their own domain without the involvement and cost of third parties, and enables instant publishing of data.
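
A simple way to picture this lookup, hashing a human-readable name straight to a network address that holds a map of services, is sketched below. The types and the stand-in name_to_address hash are our illustration, not the real API.

```rust
// Sketch of the Decentralised Naming Service lookup idea.
use std::collections::HashMap;

// Stand-in for the cryptographic hash the network would use.
fn name_to_address(name: &str) -> u64 {
    name.bytes()
        .fold(0u64, |acc, b| acc.wrapping_mul(131).wrapping_add(b as u64))
}

fn main() {
    // Publishing: the owner simply stores a service map at the address
    // hash("maidsafe") - no registrar, no fee, instantly visible.
    let mut network: HashMap<u64, HashMap<&str, &str>> = HashMap::new();
    let mut services = HashMap::new();
    services.insert("www", "address-of-site-root-directory");
    network.insert(name_to_address("maidsafe"), services);

    // Resolving safe://www.maidsafe: hash the public name, fetch the
    // entry, then pick the requested service from it.
    let entry = &network[&name_to_address("maidsafe")];
    println!("www.maidsafe -> {}", entry["www"]);
}
```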

If you are a SAFE Network forum member of trust level 1 or higher, you will be able to participate in this test and play about with these demo apps for yourself; the following thread contains links to many of the websites published by other forum members.

SAFE email client
The second video produced by Zoki demonstrates the Email application, an end-to-end encrypted messaging app that uses the public key of the recipient to encrypt the message, ensuring that only the recipient can read its contents. While test 17 currently uses nodes managed by MaidSafe, SAFE email will be decentralised in future alphas, ensuring that no central entity can view or control access to your communications.

It is important to note that these example applications are intended as tutorials which demonstrate the features of the network while guiding application developers to create more fully featured and polished apps with the SAFE Browser DOM APIs.

Data Chains
What we currently have in test 17 is unlikely to change much before we move to alpha 2. As mentioned above, we are very encouraged by the stability of this network. In tandem with much of the work above, the team has been working on a feature called Data Chains. You may remember from our previous blog post that this is a feature we anticipate will ultimately enable the secure republishing of data should the network ever lose power, as well as providing validation that data has been stored on the network. The team has considered multiple implementation options and, subject to simulation tests, has agreed on an approach and started the implementation. Testing of this new Routing design is likely to be incorporated within alpha 3. For plans beyond this, please refer to our roadmap.
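
Although the design is still being implemented, the basic shape of a data chain as described, a hash-linked list of network events each carrying quorum signatures, can be sketched as follows. All field names and the toy hash are our guesses for illustration, not the in-progress implementation.

```rust
// Rough sketch of a data chain: each link records a network event plus
// enough group signatures that a restart can verify history offline.
struct Link {
    payload: String,      // e.g. "chunk 0xabc stored" or "node X joined"
    prev_hash: u64,       // hash of the previous link, forming the chain
    signatures: Vec<u64>, // stand-ins for quorum signatures of the group
}

// Stand-in for a cryptographic hash over a link's contents.
fn hash_link(link: &Link) -> u64 {
    link.payload.bytes().fold(link.prev_hash, |acc, b| {
        acc.wrapping_mul(1099511628211).wrapping_add(b as u64)
    })
}

// A chain is valid if every link points at its predecessor's hash and
// every link carries at least a quorum of signatures.
fn valid(chain: &[Link], quorum: usize) -> bool {
    chain.windows(2).all(|w| w[1].prev_hash == hash_link(&w[0]))
        && chain.iter().all(|l| l.signatures.len() >= quorum)
}

fn main() {
    let genesis = Link {
        payload: "group formed".into(),
        prev_hash: 0,
        signatures: vec![1, 2, 3],
    };
    let next = Link {
        payload: "chunk 0xabc stored".into(),
        prev_hash: hash_link(&genesis),
        signatures: vec![1, 2, 4],
    };
    let chain = vec![genesis, next];
    println!("chain valid: {}", valid(&chain, 3));
}
```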

Recruitment
Those who regularly visit our forum will notice an increasing number of new team members. Recruitment continues to receive significant focus as we scale the team to increase the speed and quality of the network roll-out, while also spreading the load more evenly across the team. As such, we have brought on board some operations staff at our HQ in Scotland and continue to grow the team overseas, with members currently based anywhere from Australia to Argentina!

We now have 23 people working with the company, but we are still looking for Network Engineers. If you are proficient in Rust, or have experience with C or C++ and with P2P architectures, please visit our careers page for more details on how to apply.

Well, that concludes this update. We really appreciate the continued support of everyone in the SAFE community (investors, testers, forum members). As you know, we are doing everything possible to expedite the network roll-out and give you the privacy, security and freedom you all so richly deserve.

Power of the Crowd Series: Number Four

It has been a while since we shared our initial thinking around the challenges facing the internet, and we have had some excellent reactions to the discussion so far. We very much appreciate the feedback, and it is pleasing to see this is a timely discussion. Everyone from Sir Tim Berners-Lee to Wired and the Economist is debating the impact and consequences that technology, particularly the Internet, is having on society. Many different solutions have been put forward as the best answer to deep-rooted issues such as poverty, inequality and social mobility. We agree the current social and economic model, underpinned by the internet and other technologies, has not benefited everyone equally, but we are not convinced by the proposed solutions, such as universal basic income. Therefore, it is time to put forward our suggested response and open it up for further debate and improvement. As technologists we cannot solve all the complexities, but there are ways to use technology, especially an improved internet, to deliver a fairer, safer and more inclusive society.

So how can improvements to our internet infrastructure benefit everyone?
At MaidSafe we believe the solution is community-led, which is why we talk about the Power of the Crowd; but for the crowd to be successful, control has to shift from a handful of organisations to individual users, and we have to develop an open, incentive-based economic model that rewards participation in a community. Technology will continue to play a sophisticated role, but it should be the enabler, not the source of problems and inequality. Above all, those who develop the technology should not be allowed to retain an unhealthy level of control.

We believe this will go a long way to addressing the political/philosophical, rational and emotional debates outlined below.

The Political and Philosophical Challenge
No one has worked out how global societies should move forward in their relationship with technology. A lack of consensus means thinking is being informed by both rational and irrational ideas, and uncertainty is becoming the only uncomfortable constant. As technologists we are excited by this uncertainty, but as humans we have instinctive responses to fear and threats, which should not be overlooked. While some describe a future that includes flying cars, autonomous vehicles and neural lace blurring the lines between robots and humans, others see no clear path forward for themselves and their families. These people are what Guy Standing describes as the Precariat – a new class that has evolved as a result of the rapid advances in technology. This community has no job security, is burdened with debt and lives in constant fear of social exclusion. They see robots and artificial intelligence as a threat. They look at the dominance of Google, Facebook and Amazon as unfair. Add to this the growing threat of cybercrime and the desire of governments to use mass surveillance in the name of national security, and it is easy to see why there is growing frustration. Inherent rights to self-determination, employment, privacy and security are being denied or stripped away.

The responses of governments, policy makers and regulators are stuck in the 20th century at best. They believe mass surveillance powers are the only way to combat cybercrime and terrorism, yet there is no evidence this approach works. To address the rapid advance of technology, they set up innovation funds to foster economic opportunities for future generations and commission academic bodies to analyse the social impact. Yet they skirt nervously around the big ugly question of control and ownership, particularly that exerted by the internet technology vendors. Jonathan Taplin argues that breaking up Google would lead to the same type of innovation explosion that accompanied the break-up of AT&T. Resorting to regulators always makes markets uneasy, but you know there is a problem when even free market advocates like the Economist suggest regulation is required!

The Economist has rightly identified that it is not technology that defines our current era: it is data, and ceding control of all our data to a few vendors is a bad idea. Furthermore, the current regulatory model is not fit for purpose, as it has failed to keep up with the pace of technological change. The answer is simple. We must switch control back to the user and give the individual the rights, education and skills to make informed decisions about how and when they engage with technology and with those providing products or services via the internet.

The Rational Problem
Perhaps where governments can effectively support this switch in control is in introducing regulation that changes the dynamics of the current internet-led economic model. The most radical answer would be a disbandment of existing intellectual property laws, which the likes of Guy Standing believe concentrate control in the hands of the few. Allowing a small number of companies to hold patents on crucial technologies enables them to defeat competition and maintain regular income flows. This is the key rational economic challenge to overcome. We have to ensure technology does not widen the disparity between the 'haves' and 'have-nots' but closes the gap.

At MaidSafe, we are sceptical that regulation alone can address the economic disparity question. One idea would be for an international governing body to oversee the internet and levy a tariff on internet companies, dividing the proceeds between countries to support the expansion of infrastructure and the improvement of technical skills. This is unrealistic. Anyone observing the World Trade Organisation attempting to secure agreement on universal trade will see how hard countries find it to set aside national interests.

Another, more radical, approach is demanding greater adherence from internet companies to the principles of open source and the open web; in particular, rebalancing what is considered intellectual property (IP) in order to improve accessibility. This is one of the main reasons why MaidSafe has made the SAFE Network code available under the GPL licence and transferred ownership of the underlying and defensive IP to the MaidSafe Foundation, a Scottish charity focussed on fostering education and innovation. Both Jonathan Taplin and Guy Standing talk about the internet companies being landlords taking rent from those using their IP. We are not suggesting all protection for innovators be removed, but there is an argument that economically we have become over-reliant on patents and should reduce that dependency.

Encouraging the open sourcing of more critical infrastructure technologies creates the potential for a more level playing field as a starting point for those who want access to the internet. Of course, the big technology companies will say their business models fundamentally rely on revenue streams from existing products to fund the next generation of products, but they appear to have forgotten that a lot of today's products and services started out as publicly funded research projects. If commercial companies are going to secure a long-term revenue stream from rentable models, then surely they must be encouraged to take a different approach to patents and IP.

More importantly, though, it would show willingness from industry to address the even bigger issue of inclusion; despite technologists heralding ever-growing numbers of people accessing the internet, there are still far too many cut off from its opportunities. Ultimately, this is one issue the policy makers and governments have to address, but adopting a more open source approach can go some way to enabling greater access.

The Future is a Community-Led Movement
However, we believe the above options do not go far enough. Internet companies, particularly those obliged to report to Wall Street, will always struggle to balance commercial pressures against social good. That is why we have significant doubts about universal basic income, which the technology industry appears to be backing over-enthusiastically. On one level it appears arrogant, suggesting that 'poor' people should rely on a form of welfare system to make up for a lack of work. Perhaps we should all be grateful that the top 1% dole out handouts, but the vast majority of people we know would be offended if their family and future generations had to rely on UBI to get by. It lacks innovative thinking: yes, technology will take away jobs, but we also believe it will create new ones and new economic models. Frankly, UBI is not radical enough, born of traditional approaches to the welfare state.

Our proposal is that the network becomes a source of income and economic opportunity based on contribution and participation. Fundamentally it becomes a reward system, where individuals and communities can contribute and feel a sense of accomplishment based on their level of participation. Above all, this should be a bottom-up approach, led today by communities of like-minded individuals. Network technologies and reward mechanisms are being developed to empower communities to take control of their identities and be more fairly rewarded. This will mean we are less reliant on the dominant internet companies and not waiting for government policy to catch up.

It still allows commercial companies to profit, but it also means users and content producers get to share the spoils. We should be offering users a reward in return for access to their data, and we should find innovative ways for users to monetise their computing resources. More and more households and communities will have sophisticated computing equipment that could provide revenue streams when individuals are not working. For example, at MaidSafe we are developing Safecoin, which provides a fair reward and payment mechanism for access to data. Combined with the ability of the SAFE Network to identify the owner of each chunk of data, it will be a better way for content producers (artists, bloggers and musicians alike) to receive payment, as well as paying users for access to their spare computing capacity.

The Emotional Challenge
We believe incentivising participation is crucial in addressing the final and most divisive challenge – the ambiguity that the rise of technology has created for many people. Understandably, it has led many to react instinctively and angrily to the control of the internet oligarchy. People are worried machines will lead to widespread redundancies and ultimately long-term unemployment, with no positive alternatives explained. The only way to address these concerns, which can become very emotive, is to create a community-led response. Working together, communities should be able to define opportunities, whether economic or social. The key is enablement and encouraging groups to work together, which again comes back to rewards and incentives. We already see a lot of this collaborative working in the SAFE Network Forum, which is moderated by members of the community, with MaidSafe only a contributor.

Using incentives and open source technology will make participation both accessible and beneficial. It will allow groups to work through challenges and create very local solutions. For example, imagine a community-led computing facility that generated income to support the group by offering capacity to the SAFE Network. That income could be shared among the group or used in exchange for products and services with other communities via the platform.

Clearly it is hard to envisage this reality while the SAFE Network is still in development, but the growth of the SAFE Network Forum emphasises the value of a community-led approach. There is a role for government in supporting these communities, making people aware of them and educating people on ways to participate. This is a central element of the inclusion issue. If governments and education institutions can provide the training and support to help citizens understand the opportunities this model offers, it will empower communities to find their own answers.

However, we should not wait for policy makers to catch up. We have left it to the politicians for too long to come up with the answers and they have failed. We will have far greater influence over our relationship with technology and how it affects our lives if we build a movement that mobilises around our needs. The vision is not one huge amorphous online community, but many different ones focused around common interests and needs, benefiting from open access, being rewarded for participation and being allowed far greater control of our personal data.

One final note to add. While this may seem like a huge and almost unmanageable challenge, it is no different from any other stage in history where the pace of technological change has forced a rethink of our approach to society and economics. Take this example:

“The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.”

This was written in 1890 by Samuel D. Warren and Louis D. Brandeis in the Harvard Law Review. Similar to today's technology, advances in photography in the late 19th century were seen as hugely disruptive to society. We survived that inflection point. We got some things right and some things wrong. I'm sure that with a willingness to take some brave decisions and a community-led approach we will get through this next stage in our relationship with technology.

SAFE Network Development Summary – May 2017

We’ve had quite few requests on social media and on email these past few days requesting updates on development progress. These messages serve to remind us that not everyone has the time or the inclination to read the weekly development updates which we post each Thursday onto the forum. So many projects, so little time! So the intention with this post is to provide a summary of the most recent events and our hopes and expectations moving forward.

Roadmap
The best place to start is our development roadmap, which we updated and published late last week. This web page tries to encapsulate all the complexities of development over time on one page, so it's pretty high-level, but it is this snapshot view that most people seem to appreciate. You will notice that the roadmap outlines the major aspects of development and gives a rough indication of the order in which we anticipate tackling them.

You will also notice that we haven't included timescales. In the past we have provided timescales for 'launch' of the network. These have always been wrong, despite our best efforts. We have found it difficult to estimate timescales because, we believe, so much of what we have been working on is brand new technology, sometimes completely bespoke, and at other times building on the work of other projects. Testing is also interesting: it really helps us understand more about how the network fits together and how it is utilised by our community, but it invariably leads to more tweaking and testing, with previously unplanned and unknown rework and test durations.

We believe that publishing release dates with a high degree of uncertainty attached is not helpful to anyone and can cause more frustration than not publishing them at all. Network-related development is typically where the biggest black holes are, and as we get into incremental client-side development, we anticipate timescales will become more predictable.

Stable decentralised network
In late March we released test 15, a network that incorporated both data-centre resources and user-run vaults. Within this release, users were also able to run the SAFE Browser, Launcher and demo app, which continue to facilitate the storage of private and public data, as well as the creation of public IDs and the publishing of SAFE websites.

After three days of running a stable network without any lost data, we realised we had reached an important milestone. While we had done this extensively in private tests, it was fantastic to see it running publicly and to see the community's reaction. Of course, life has a sense of humour, and shortly afterwards it became apparent that a script had been written that created fake accounts and filled the relatively small network with data, stopping the creation of new accounts and the uploading of new data. This was really helpful to us, as it enabled us to find out what happens when the network reaches capacity in a real-world setting. The fact that it behaved as expected was reassuring, although we'd be lying if we didn't admit to finding the spam attack a little frustrating. This is of course something that the integration of safecoin would stop, as the requirement to 'pay' to store data will make such an attack expensive, while the incentive of safecoin to farmers would lead to a significantly bigger network.

What now?
Looking forward, we are currently focussed on three main areas:

  • Catering for mobile devices.
  • Enabling greater user participation.
  • Improving the resilience and robustness of the network.

Mobile
The patience app developers have shown to this point is soon to be rewarded. The process of converting our APIs away from a REST paradigm to SDKs was essential to cater for mobile devices, as the requirement for REST APIs to maintain state would not have worked with mobile devices that disconnect and reconnect regularly. Users of the SAFE Network will gain access through the Authenticator, a secure gateway that protects user credentials from the application itself. The Authenticator is currently being bundled with the SAFE browser and will enable users to securely authenticate themselves onto the network, or enable them to browse publicly available data without logging in.
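
A minimal sketch of that flow, in which an app submits a permission request and receives (at most) a scoped grant while the credentials stay inside the Authenticator, might look like the following. The type and function names are our own simplification, not the actual Authenticator API.

```rust
// Sketch of the Authenticator idea: the app never sees the user's
// credentials, only a scoped grant approved by the user.
struct AuthRequest {
    app_name: &'static str,
    permissions: Vec<&'static str>, // e.g. "read: inbox"
}

struct AuthGrant {
    app_name: &'static str,
    granted: Vec<&'static str>,
}

// Runs inside the Authenticator, where the credentials live.
fn authenticate(secret: &str, password: &str) -> bool {
    !secret.is_empty() && !password.is_empty() // stand-in for real login
}

// The user sees the request and decides; the app gets a grant or nothing.
fn authorise(req: AuthRequest, user_approves: bool) -> Option<AuthGrant> {
    if user_approves {
        Some(AuthGrant { app_name: req.app_name, granted: req.permissions })
    } else {
        None // credentials were never exposed to the app either way
    }
}

fn main() {
    assert!(authenticate("my secret", "my password"));
    let req = AuthRequest {
        app_name: "SAFE Mail",
        permissions: vec!["read: inbox", "insert: outbox"],
    };
    if let Some(grant) = authorise(req, true) {
        println!("{} granted: {:?}", grant.app_name, grant.granted);
    }
}
```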

To implement the Authenticator, the team needed to add a new data type: mutable data. The new data type improves network efficiency, saves bandwidth, and provides the granular access control required by mobile platforms.
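
To picture what 'granular access control' on mutable data means, here is an illustrative sketch: a keyed map of entries plus a per-app permissions list that is checked before any mutation is allowed. Names and types are ours, not the network's actual definitions.

```rust
// Illustrative mutable-data structure with per-app permissions.
use std::collections::HashMap;

#[derive(PartialEq, Eq, Clone, Copy, Debug)]
enum Action { Insert, Update, Delete }

struct MutableData {
    name: u64,                              // network address
    owner: u64,                             // owner's key id
    entries: HashMap<String, Vec<u8>>,
    permissions: HashMap<u64, Vec<Action>>, // app key -> allowed actions
}

impl MutableData {
    fn mutate(&mut self, app: u64, action: Action, key: String, value: Vec<u8>)
        -> Result<(), &'static str>
    {
        // The owner may do anything; other apps only what they were granted.
        let allowed = app == self.owner
            || self.permissions.get(&app).map_or(false, |a| a.contains(&action));
        if !allowed {
            return Err("access denied: app lacks this permission");
        }
        match action {
            Action::Insert | Action::Update => { self.entries.insert(key, value); }
            Action::Delete => { self.entries.remove(&key); }
        }
        Ok(())
    }
}

fn main() {
    let mut md = MutableData {
        name: 0xabc,
        owner: 1,
        entries: HashMap::new(),
        permissions: HashMap::from([(42, vec![Action::Insert])]),
    };
    assert!(md.mutate(42, Action::Insert, "k".into(), b"v".to_vec()).is_ok());
    assert!(md.mutate(42, Action::Delete, "k".into(), vec![]).is_err());
}
```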

With mobile devices so ubiquitous throughout the world, mutable data, which enables mobile client access to the network, has been receiving significant focus. From a resource-provision perspective, both alpha and beta versions of the network will require laptops and desktops, and in time single-board computers, to earn safecoin when it is released. In time we will look at enabling mobile devices to farm for safecoin when plugged into a power outlet and in range of WiFi; however, as we detail below, this is not a priority for now.

More alphas
Some of the example applications that have been created are currently being ported to suit the new data type and to be compatible with the new APIs. The team are updating the documentation and testing the applications using a mock network, and they seem far more stable than previous iterations, which is encouraging. We anticipate alpha 2 will encompass the new Mutable Data type and Authenticator, the SAFE Browser DOM APIs and the Node.js SDK, along with example apps, tutorials and documentation.

Alpha 3 will see our focus shift to enabling a greater number of users to run Vaults from home by integrating uTP. Presently, users must set up TCP port forwarding or enable UPnP on their routers, which requires a little setup in some cases. Adding uTP support will make for a more seamless process for many, while making the network accessible to more users. uTP is used in some BitTorrent clients and, when implemented effectively, helps to mitigate poor latency and facilitates the reliable and ordered delivery of data packets.

During this phase we will also integrate node ageing, a feature that makes the network more resilient to consensus group attacks. The team will also implement the first part of data chains, a long-planned feature that we anticipate will ultimately enable the secure republishing of data should the network ever lose power, as well as providing validation that data has been stored on the network.
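
The intuition behind node ageing can be sketched briefly: a node's influence grows only with verified time and work on the network, so freshly created attacker nodes carry little weight in any consensus group. The doubling rule and voting threshold below are illustrative assumptions, not the final design.

```rust
// Illustrative node-ageing sketch; the numbers are assumptions.
struct Node {
    id: u64,
    age: u32,
}

impl Node {
    // Each age increment costs roughly double the proven work/churn of the
    // last, so old, trusted nodes are exponentially expensive to fake.
    fn work_needed_for_next_age(&self) -> u64 {
        1u64 << self.age
    }

    // Young nodes relay traffic but do not get a say in group decisions.
    fn can_vote(&self) -> bool {
        self.age >= 5 // assumed threshold
    }
}

fn main() {
    let newcomer = Node { id: 7, age: 1 };
    println!(
        "node {:x}: voting={}, work to age up={}",
        newcomer.id,
        newcomer.can_vote(),
        newcomer.work_needed_for_next_age()
    );
}
```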

Looking ahead
Beyond alpha 3 we will focus on:

  • Data Chains, part 2.
  • Data republish and network restarts.
  • A security audit of the network.
  • Test safecoin.
  • Real-time network upgrades.
  • Network-validated upgrades.

As has been the case to this point, we will continue to release multiple test nets regularly between each alpha network, to prove the technology in a public setting and to guard against regressions in the code.

We continue to be grateful for the huge support of the people who take the time to run these networks and report back; you all know who you are!

Developer Case Study – Dsensor

Decentralized Mapping Protocol Project – Dsensor

Continuing our series of case studies highlighting early stage application development on the SAFE (Secure Access For Everyone) Network, Dsensor is being developed by James Littlejohn. James explored various platforms to store and protect the data he would be collecting and decided to use the SAFE Network because it reflected his belief that the network should not be driven by economics, but be focused first and foremost on the data.

MaidSafe's fully secure, decentralised approach supported James' view that knowledge or data should be in the complete control of the user. While it is early days, Dsensor's use of the SAFE Network APIs in its proof-of-concept form shows its potential as a viable platform for the management of data. James was also attracted to the SAFE Network because of its strong encryption and its ability to break data into chunks before scattering them around the decentralised network of nodes. Combined with the decentralised architecture, which avoids offering hackers the central points of attack we experience in today's centralised, server-based model, this ensures the highest possible security and privacy for users.

Being open source and supported by a strong community in the SAFE Network forum also means James has ready access to experts and potential partners, who can help to build out the application and troubleshoot any technical questions. In the future James may also explore using safecoin to incentivise participation on Dsensor.

The Problem with Science

James Littlejohn has been involved in entrepreneurial projects since the dot-com boom, and while investigating opportunities around text mining he identified an opportunity for lifestyle-linking analytics, particularly in the area of wearable tech. In the course of his evaluation he recognised a broader application to data mining and analysis in the field of scientific and academic research. James assessed a number of drivers, including emerging technologies and changing economic conditions, which were beginning to have an effect on the way research was conducted.

Firstly, walled-garden applications such as Facebook and wearable technologies were becoming more prevalent, and while they were a rich source of data on human activity, access to that information was restricted. At a time when the internet is supposed to be democratising many aspects of work and social life, this is endangering an important source of information on lifestyle and health patterns, which could benefit societies around the world.

Secondly, the sustained economic impact of the financial crisis was creating significant pressure on public funding for research at a time when it was needed more than ever. Technology and the availability of large amounts of data are leading to opportunities for breakthroughs in a wide variety of academic and research fields. If the funding is not available via traditional public sources, then there is an urgent need to find new forms of investment. The rise of alternative cryptocurrencies could potentially address this point, offering a new, fairer way to incentivise and reward individuals for participating in research projects. For example, James envisages a scenario where a grant funder might 'tokenise' a percentage of their funding money and issue it via a science blockchain (like Dsensor). This would help to ensure the funding could be traced directly, ensuring good governance of scientific research projects and fairer access to resources.

The final driver for a new model reflects an ongoing debate about the model of peer-reviewed scientific research. For a number of years there has been a recognition of some fundamental weaknesses in the model, in areas such as the replicability of research. In a poll conducted by Nature in May 2016, more than 70% of researchers admitted they had tried and failed to reproduce the experiments of other scientists, and more than 50% had failed to reproduce their own experiments. Of course this is in part due to the nature of frontier scientific research, which is reliant on trial and error, but there are clearly inefficiencies in the process.

Furthermore, there are questions about the efficiency of current research models – in 2009 Chalmers and Glasziou identified some key sources of avoidable waste in biomedical research. They estimated that the cumulative effect was that about 85% of research investment – equating to about $200 billion of the investment in 2010 – is wasted. A blockchain provides a potential solution to this reproducibility crisis, as Dr. Sönke Bartling and Benedikt Fecher outline in their paper, "Blockchain for science and knowledge creation." Although scientific research should be delivered at arm's length from the individual contributors, it is ultimately reliant on individual scientists to gather and interpret data without bias. It is also often reliant on finite data sets, controlled samples or clinical trials, thus limiting the ability to cross-reference the findings against other data sources.

Given the availability of data via the internet and the rise of automation technologies, such as machine learning, James believes that if individuals have control of their information, they can decide to contribute it to research projects without the interference of third parties such as academics or technology providers. Using automation, scientists, academics and, more importantly, citizen scientists can draw data from anywhere in the world, beyond the confines of a specific controlled sample, and review it independently to provide a data-driven outcome.

Building A Blockchain for Science Research – A Truth Engine for Mankind

James' investigation of text mining approaches led him to peer-to-peer models, which were enabling the owners of data to take control of how and with whom their information was shared.

It led to the development of Dsensor.org (the Decentralized Mapping Protocol), a peer-to-peer network in which science knowledge can be investigated, discovered and shared. It is based on the principle of science "SenseMaking" and is designed to evolve peer review into a computational consensus model. Using Dsensor, a scientist who creates a thesis and wants to test it enters the hypothesis in computational form (called a Dmap in Dsensor speak). The Mapping protocol then automates the testing of the science, starting by trawling the Dsensor network for relevant data from other peers. That data is then sampled and 'scored' based on its prediction power to verify or challenge the thesis, until a computational consensus is established. Science attaining this status then becomes 'computationally active' in the network, meaning any peer can tap into the collective knowledge, feed in their own unique sensor data and get the insights from the science working for them.

James has the ambitious goal of making Dsensor a "truth engine for mankind", ensuring science is based on openness, transparency and reproducible results, essentially checking the accuracy of peer review. Dsensor intends to deliver this outcome by building a network of trustless peers whose inherent complexity makes it economically and technically very costly and difficult to corrupt. Dsensor, currently at proof-of-concept stage, utilises the Ethereum blockchain, using its cryptography and a variant of the proof-of-work process to create a technological and mathematical state where, even with colluding computers, it is impossible to game the system. Rather than creating complexity using proof of work, Dsensor creates uncertainty using random sampling, in particular in the selection of peers from the network and in its data sampling techniques. The data sampling techniques are selected by each autonomous peer, and the network sampling is a property of the protocol for selecting peers from a Distributed Hash Table. In theory, once the network reaches a certain size, gaming it with false sensor data to support a fraudulent scientific computation will become prohibitively expensive.
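
Our reading of that sampling idea can be sketched as follows: peers are drawn unpredictably from a hash-keyed table, each peer's data is scored for prediction power against the hypothesis, and the computation only becomes 'active' on strong consensus. Every name, formula and threshold here is illustrative, not Dsensor's actual protocol.

```rust
// Illustrative sketch of random peer sampling plus consensus scoring.
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Stand-in scoring: the closer a peer's data is to the hypothesis's
// prediction, the higher its 'prediction power' (assumed formula).
fn score_against_hypothesis(peer_data: f64, predicted: f64) -> f64 {
    1.0 / (1.0 + (peer_data - predicted).abs())
}

fn main() {
    // Peers keyed by hash; the unpredictable keying stands in for random
    // sampling of peers from a Distributed Hash Table.
    let peers: Vec<(u64, f64)> = (0..100)
        .map(|i| {
            let mut h = DefaultHasher::new();
            h.write_u64(i);
            (h.finish(), 20.0 + (i % 7) as f64) // (peer id, sensor reading)
        })
        .collect();

    let predicted = 23.0; // what the candidate 'Dmap' computation predicts
    let consensus: f64 = peers
        .iter()
        .map(|(_, reading)| score_against_hypothesis(*reading, predicted))
        .sum::<f64>()
        / peers.len() as f64;

    println!("mean prediction power: {consensus:.2}");
    println!("computationally active: {}", consensus > 0.5); // assumed bar
}
```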

Additional safeguards include the role of reproducibility in science. This creates an immutable audit trail of "mapping accounting" entries that support the most truthful science available to the network. These records are called GaiaBlocks and are open to challenge by any peer on the network. Scoring of the science also provides a rating model for scientific research and for individual peers on the network. Peers with poor outcomes will be demoted in favour of more highly rated scientific computations.