Since the last blog update in May we have published new test networks that are helping us to evaluate much of our recent development work. If you recall, we made several changes to accommodate mobile devices as network clients. These changes included the addition of the Authenticator (a secure access mechanism that is bundled with the SAFE Browser) and a new network data type – mutable data – as well as a significant number of changes within the APIs.

Test 17
The current network, test 17, was introduced initially to a small number of forum users, but has since been scaled out to accommodate more. Updated in mid-July (the 13th) and re-released based on initial feedback, and barring a few minor bugs, test 17 has behaved as anticipated and we’re very encouraged by its stability. We intend to keep a test network in place from now on so that app developers can build against this network, rather than resorting to running apps locally.

Forum member Zoki has put together a couple of videos which he has posted on YouTube that demonstrate the use of the Authenticator and the Web Hosting Manager, as well as viewing a few SAFE websites along the way. The Authenticator enables users to create their own network credentials without the involvement of third parties and provides access to the test network.

DNS, but not as we know it
The Web Hosting Manager enables users to create their own public ID and service, to which they can then upload content and publish it for other network users to view. This feature demonstrates a different approach to the Domain Name System (DNS) used on the existing Internet, which is managed by several DNS providers, such as Dyn and Verisign. Within the SAFE Network, this Decentralised Naming Service enables website owners to create their own domain without the involvement and cost of third parties, and enables instant publishing of data.

If you are a SAFE Network forum member with trust level 1 or higher, you will be able to participate in this test and play about with these demo apps for yourself; the following thread contains links to many of the websites published by other forum members.

SAFE email client
The second video produced by Zoki demonstrates the Email application, an end-to-end encrypted messaging app that uses the public key of the recipient to encrypt the message, ensuring that only the recipient can read its contents. Currently using nodes managed by MaidSafe in test 17, SAFE email will be decentralised in future alphas, ensuring that no central entity can view or control access to your communications.

It is important to note that these example applications are intended as tutorials which demonstrate the features of the network while guiding application developers to create more fully featured and polished apps with the SAFE Browser DOM APIs.

Data Chains
What we currently have in test 17 is unlikely to change much before we move to alpha 2. As mentioned above, we are very encouraged by the stability of this network. In tandem with much of the work above, the team has been working on a feature called Data Chains. You may remember from our previous blog post that this is a feature we anticipate will ultimately enable the secure republishing of data should the network ever lose power, as well as providing validation that data has been stored on the network. The team has considered multiple implementation options and, subject to simulation tests, has agreed on an approach and started the implementation. Testing of this new Routing design is likely to be incorporated within alpha 3. For plans beyond this, please refer to our roadmap.

Those who regularly visit our forums will notice an increasing number of new team members. Recruitment continues to receive significant focus as we scale the team to increase the speed and quality of the network rollout while also spreading the load more evenly across the team. As such, we have brought on board some operations staff at our HQ in Scotland and continue to grow the team overseas, with members currently based anywhere from Australia to Argentina!

We now have 23 people working with the company, but we are still looking for Network Engineers. If you are proficient in Rust, or experienced with C or C++, and have worked with P2P architectures, please visit our careers page for more details on how to apply.

Well, that concludes this update. We really appreciate the continued support of everyone in the SAFE community (investors, testers, forum members). As you know, we are doing everything possible to expedite the network rollout and to give you the privacy, security and freedom you all so richly deserve.

SAFE Network Development Summary – May 2017

We’ve had quite a few requests on social media and by email these past few days for updates on development progress. These messages remind us that not everyone has the time or the inclination to read the weekly development updates which we post each Thursday on the forum. So many projects, so little time! The intention with this post is therefore to provide a summary of the most recent events and our hopes and expectations moving forward.

Image: Richard Tilney Bassett

The best place to start is our development roadmap, which we updated and published late last week. This web page tries to encapsulate all the complexities of development over time on one page, so it’s pretty high level, but it is this snapshot view that most people seem to appreciate. You will notice that the roadmap outlines the major aspects of development and a rough indication of the order in which we anticipate tackling them.

You will also notice that we haven’t included timescales. In the past we have provided timescales for ‘launch’ of the network. These have always been wrong despite our best efforts. We have found it difficult to estimate timescales because, we believe, so much of what we have been working on is brand-new technology, sometimes completely bespoke and other times building on the work of other projects. Testing is also interesting: it really helps us understand more about how the network fits together and how it is used by our community, but it invariably leads to more tweaking and retesting, with previously unplanned and unknown rework and test durations.

We believe that publishing release dates that have a high degree of uncertainty attached is not helpful to anyone and can cause more frustration than not publishing them at all. Network related development is typically where the biggest black holes are and as we get into incremental development client-side, we anticipate time scales will become more predictable.

Stable decentralised network
In late March we released test 15, a network that incorporated data centre resources as well as user-run vaults. Within this release, users were also able to run the SAFE Browser, Launcher and demo app, which continue to facilitate the storage of private and public data, as well as the creation of public IDs and the publishing of SAFE websites.

After three days of running a stable network without any lost data we realised we had reached an important milestone. While we had done this extensively in private tests, it was fantastic to see it running publicly and to see the community reaction to it. Of course, life has a sense of humour, and shortly afterwards it became apparent that a script had been written that created fake accounts and filled the relatively small network with data, stopping the creation of new accounts or the uploading of new data. This was really helpful to us, as it enabled us to find out what happens to the network when it reaches capacity in a real-world setting. The fact that it behaved as expected was reassuring, although we’d be lying if we didn’t admit to finding the spam attack a little frustrating. This is of course something that the integration of safecoin would stop, as the requirement to ‘pay’ to store data will make the attack expensive, while the incentive of safecoin to farmers would lead to a significantly bigger network.

What now?
Looking forward, we are currently focussed on three main areas:

  • Catering for mobile devices.
  • Enabling greater user participation.
  • Improving the resilience and robustness of the network.

The patience app developers have shown to this point will soon be rewarded. Converting our APIs from a REST paradigm to SDKs was essential to cater for mobile devices, as the requirement for the REST APIs to maintain state would not have worked with mobile devices that disconnect and reconnect regularly. Users of the SAFE Network will gain access through the Authenticator, a secure gateway that protects user credentials from the application itself. The Authenticator is currently bundled with the SAFE Browser and will enable users to securely authenticate themselves onto the network, or to browse publicly available data without logging in.

To implement the Authenticator, the team needed to add a new data type: mutable data. The new data type improves network efficiency, saves bandwidth, and provides the granular access control required by mobile platforms.
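As a rough illustration of the kind of granular access control mutable data enables, the sketch below models a key-value container whose owner grants individual apps specific permissions per operation. All names here (`MutableData`, `Permission`, `grant`) are hypothetical and chosen for illustration; this is not the actual SAFE API.

```python
# Illustrative sketch only: an owner-controlled key-value container
# with per-app permissions, loosely modelled on the mutable data
# ideas described above. Not the real SAFE Network API.
from enum import Enum, auto

class Permission(Enum):
    READ = auto()
    INSERT = auto()
    UPDATE = auto()
    DELETE = auto()

class MutableData:
    def __init__(self, owner):
        self.owner = owner
        self.entries = {}      # key -> value
        self.permissions = {}  # app_id -> set of Permissions

    def grant(self, requester, app_id, perms):
        # Only the owner may change the permission set.
        if requester != self.owner:
            raise PermissionError("only the owner can grant access")
        self.permissions.setdefault(app_id, set()).update(perms)

    def _check(self, app_id, perm):
        # The owner bypasses checks; apps need an explicit grant.
        if app_id != self.owner and perm not in self.permissions.get(app_id, set()):
            raise PermissionError(f"{app_id} lacks {perm.name}")

    def insert(self, app_id, key, value):
        self._check(app_id, Permission.INSERT)
        if key in self.entries:
            raise KeyError("key already present; use update")
        self.entries[key] = value

    def update(self, app_id, key, value):
        self._check(app_id, Permission.UPDATE)
        self.entries[key] = value
```

The point of the sketch is the separation of concerns: an app granted only `INSERT` can add entries but cannot modify existing ones, which is the sort of fine-grained control a mobile client needs when it cannot hold a long-lived session.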

With mobile devices so ubiquitous throughout the world, enabling mobile client access to the network via mutable data has been receiving significant focus. From a resource-provision perspective, both alpha and beta versions of the network will require laptops and desktops, and in time single-board computers, to earn safecoin when it is released. Eventually we will look at enabling mobile devices to farm for safecoin when plugged into a power outlet and in range of WiFi; however, as we detail below, this is not a priority for now.

More alphas
Some of the example applications that have been created are currently being ported to the new data type and the new APIs. The team are updating the documentation and testing the applications against a mock network, and they seem to be far more stable than previous iterations, which looks promising. We anticipate alpha 2 will encompass the new mutable data type and Authenticator, the SAFE Browser DOM APIs and Node.js SDK, along with example apps, tutorials and documentation.

Image: Clint Adair

Alpha 3 will see our focus shift to enabling a greater number of users to run vaults from home by integrating uTP. Presently, users must set up TCP port forwarding or enable UPnP on their routers, which requires a little setup in some cases. Adding uTP support will make for a more seamless process for many, while making the network accessible to more users. uTP is used in some BitTorrent implementations and, when implemented effectively, helps to mitigate poor latency and facilitate the reliable, ordered delivery of data packets.

During this phase we will also integrate node ageing, a feature that makes the network more resilient to consensus group attacks. The team will also implement the first part of Data Chains, a long-planned feature which we anticipate will ultimately enable the secure republishing of data should the network ever lose power, and provide validation that data has been stored on the network.
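Node ageing can be illustrated with a toy model: a node’s age only grows through work the network has observed over time, so a freshly joined attacker cannot immediately carry weight in a consensus group. The relocation rule used below (relocate after 2^age observed churn events) and the weighting function are assumptions for illustration, not the exact algorithm.

```python
# Toy model of node ageing: age, and therefore influence, is earned
# by surviving churn, not granted on joining. The 2**age relocation
# threshold here is an illustrative assumption.
class Node:
    def __init__(self, name):
        self.name = name
        self.age = 1
        self.churn_seen = 0

    def on_churn(self):
        self.churn_seen += 1
        # After enough observed churn, the node is relocated to a new
        # section and its age increases.
        if self.churn_seen >= 2 ** self.age:
            self.churn_seen = 0
            self.age += 1
            return True   # relocated
        return False

def voting_weight(node):
    # Older nodes carry more weight in group decisions, so flooding
    # the network with new nodes buys an attacker very little.
    return 2 ** node.age
```

Because the churn required to gain each year of age doubles, influence grows slowly and an attacker would have to contribute honest work for a long time before mattering to any consensus group.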

Looking ahead
Beyond alpha 3 we will focus on:

  • Data Chains, part 2
  • Data republish and network restarts
  • A security audit of the network
  • Test safecoin
  • Real-time network upgrades
  • Network-validated upgrades

As has been the case to this point, we will continue to release multiple test networks regularly between each alpha network to prove the technology in a public setting and to guard against code regressions.

We continue to be grateful for the huge support of the people who take the time to run these networks and report back; you all know who you are!

Vault network test

Just to keep everyone abreast of the current state of the vault progress, I thought I’d post a quick video of our test network running (you’ll probably need to view it fullscreen, high quality if you want to see it properly).

The test’s very basic – just a single client node creating an account, then storing, retrieving and deleting a chunk of data.  However, this represents quite a significant step forward.  The ability to delete data reliably from a peer-to-peer network was one of the more difficult problems we faced.  There are a few issues here.

A big one is: if a vault is asked to delete a chunk which it’s currently storing, how does it know the request is valid?  The requester has to have originally stored the chunk to be allowed to delete it.  Another biggie: if the same chunk gets stored several times, we only want to keep a maximum of four copies (it’s wasteful otherwise).  This means that if a chunk gets stored, say, 100 times, we only want to really delete it from the network once 100 delete requests have been made.

We looked at a variety of ways to handle these problems, but ultimately we found a way which hardly involves the client (the requester).  It boils down to how the storing is done in the first place.

To store the data, the client passes the chunk to the four vaults with names closest to his own name.  These vaults (let’s call them ClientManagers) then check that the client has enough “credit” left to be allowed to store the chunk, i.e. that he’s got a vault which is doing work for the network.  If so, they pass the chunk to the four vaults with names closest to the name of the chunk.

These guys (we call them DataManagers) then check to see if that chunk has already been stored.  If so, they increase a counter by one and the job’s done.  If not, each DataManager chooses one random vault on the network to do the actual storing (we call these vaults PMID Nodes*) and gives the chunk to them.

Well, that’s a slight lie – each DataManager gives the chunk to the four vaults with names closest to the PMID Node!  Obviously these chaps are called PMID Managers; they manage the account of the PMID Node.  They credit the PMID Node’s account with the size of the chunk (this increases the rank of that vault and hence allows its owner to get more credit) and then send the chunk to the PMID Node on the last leg of its journey.

The data’s now safe and sound.  This complex mechanism solves several issues – but getting back to the delete problems, it gives us a way to solve both of these.

When the client first stored the data, it gave the chunk to its managers.  At that stage, they logged this fact, so they know definitively whether that same client should be allowed to delete a chunk or not.  First problem solved.

Next, if the client’s request is valid, the ClientManagers pass the request on to the DataManagers.  These were the vaults which kept a count of how many times the same chunk had been stored.  So all they do is reduce the counter by one for that chunk and see if the counter hits zero.  If not, the delete is finished.  If so, they have to tell the PMID Nodes (via their managers) to actually delete the chunk from the network.  Second problem solved!
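The store-and-delete accounting described above can be sketched as a reference counter held by the DataManagers. For compactness the “who stored it” log is kept alongside the counter here, although in the scheme described above that validity check is the ClientManagers’ job; everything in this snippet is an illustrative model, not the actual vault code.

```python
# Illustrative model of the reference counting described above:
# duplicate stores only bump a counter, and the chunk is physically
# erased only when the count returns to zero.
class DataManagers:
    def __init__(self):
        self.counts = {}     # chunk name -> number of logical copies
        self.stored_by = {}  # chunk name -> set of clients that stored it

    def store(self, client, chunk_name):
        self.counts[chunk_name] = self.counts.get(chunk_name, 0) + 1
        self.stored_by.setdefault(chunk_name, set()).add(client)

    def delete(self, client, chunk_name):
        # Only a client that originally stored the chunk may delete it.
        if client not in self.stored_by.get(chunk_name, set()):
            raise PermissionError("client never stored this chunk")
        self.counts[chunk_name] -= 1
        if self.counts[chunk_name] == 0:
            # Last logical copy gone: tell the PMID Nodes (via their
            # managers) to actually erase the chunk.
            del self.counts[chunk_name]
            del self.stored_by[chunk_name]
            return "erase"
        return "kept"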

We had originally thought that an awful lot of the storing/deleting process would involve the vaults signing requests to one another and keeping these for ages and passing about copies of these signed requests.  This was shaping up to be a horrible solution – much more complex than the process described above, but worse, it would be fairly hard on the vaults doing the work and worse still would place significant demands on the clients.

We want the clients to have as little work to do as possible.  The client is maybe a user’s old PC or possibly one of these newfangled phone things I hear so much about.  They need to be kept fast and responsive.  So our fix is to rely on the closeness of vault names.  Vaults can’t pretend to be close to a certain name (that’s for another discussion – but they really can’t) and as a group of four all close to a certain name, they can act as one to manage data; their authority being not four signatures, but rather just the fact that they’re all close.
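The “closeness of vault names” idea can be made concrete: closeness is XOR distance in the network’s address space, and a management group is simply the four vaults whose names are XOR-closest to a target name (a client’s name for ClientManagers, a chunk’s name for DataManagers). The sketch below (names derived with SHA-256, group size of four as described above) is an illustration, not the real routing code.

```python
import hashlib

# Sketch: derive a management group purely from name closeness.
# XOR distance: smaller result means "closer" in the address space.
GROUP_SIZE = 4

def name_of(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def closest_group(target: int, vault_names):
    return sorted(vault_names, key=lambda v: v ^ target)[:GROUP_SIZE]

vaults = [name_of(f"vault-{i}".encode()) for i in range(20)]
chunk = name_of(b"some chunk of data")
data_managers = closest_group(chunk, vaults)
```

Because any vault can independently compute the same group for a given name, the group’s authority comes from the verifiable fact of closeness itself rather than from signatures passed around and stored.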

I’ve only scratched the surface here (well, maybe more of a first-degree burn than a scratch).  We’ve got to consider the vaults all switching off and on randomly, malicious vaults deliberately trying to corrupt the data or spoof the network, vaults accidentally losing the data, small vaults not being able to hold much data, firewalls, Windows vaults talking to Mac vaults (sort of like cats and dogs with access to firearms living together), clients spewing masses of data all at once, and so on.

So I guess the point here is that we have a really nice solution to several complex problems.  Getting to the solution was a long and error-prone path, and we’ve still a fair bit of path left, but now seeing the network doing its thing is pretty exciting.  Well it is for me anyway!

* It stands for “Proxy MaidSafe ID”, but I really don’t want to get into that here!

You get by with a little help from your friends…

SureFile Alpha testers required.

MaidSafe are about to launch a little application that delivers an extremely high level of file encryption for files stored with cloud storage providers such as Dropbox.

The application, albeit compact and bijou and stripped to the absolute minimum, is aimed at security-conscious, technical users, and we are looking for this type of person to help us test it on Windows 8 (32-bit and 64-bit) and Windows 7 (32-bit and 64-bit).

Interested? Email stephen.cosh@maidsafe.net for more information.