SAFE network

Power of the Crowd Series: Number Four

Image: Desi Mendoza
It has been a while since we shared our initial thinking around the challenges facing the internet, and we have had some excellent reaction to the discussion so far. We very much appreciate the feedback, and it is pleasing to see this is a timely discussion: everyone from Sir Tim Berners-Lee to Wired and the Economist is debating the impact and consequences that technology, particularly the Internet, is having on society. Many different solutions have been put forward as the best answer to deep-rooted issues such as poverty, inequality and social mobility. We agree the current social and economic model, underpinned by the internet and other technologies, has not benefited everyone equally, but we are not convinced by the proposed solutions, such as universal basic income. Therefore, it is time to put forward our suggested response and open it up for further debate and improvement. As technologists we cannot solve all the complexities, but there are ways to use technology, especially an improved internet, to deliver a fairer, safer and more inclusive society.

So how can improvements to our internet infrastructure benefit everyone?
At MaidSafe we believe the solution is community-led, which is why we talk about the Power of the Crowd; but for the crowd to be successful, control has to shift from a handful of organisations to individual users, and we have to develop an open, incentive-based economic model that rewards participation in a community. Technology will continue to play a sophisticated role, but it should be the enabler, not the source of problems and inequality. Above all, those who develop the technology should not be allowed to retain an unhealthy level of control.

We believe this will go a long way to addressing the political/philosophical, rational and emotional debates outlined below.

The Political and Philosophical Challenge
No one has worked out how global societies should move forward in their relationship with technology. A lack of consensus means thinking is being informed by both rational and irrational ideas and uncertainty is becoming the only uncomfortable constant. As technologists we are excited by this uncertainty, but as humans we have instinctive responses to fear and threats, which should not be overlooked. While some describe a future that includes flying cars, autonomous vehicles and neural lace that blur the lines between robots and humans, others see no clear path forwards for themselves and their families. These people are what Guy Standing describes as the Precariat – a new class that has evolved as a result of the rapid advances in technology. This community has no job security, is burdened with debt and living in constant fear of social exclusion. They see robots and artificial intelligence as a threat. They look at the dominance of Google, Facebook and Amazon as unfair. Add to this the growing threat of cybercrime and desire for governments to use mass surveillance in the name of national security and it is easy to see why there is growing frustration. Inherent rights to self-determination, employment, privacy and security are being denied or stripped away.

The response of governments, policy makers and regulators is stuck in the 20th Century at best. They believe mass surveillance powers are the only way to combat cybercrime and terrorism, yet there is no evidence this approach works. To address the rapid advance of technology they set up innovation funds to foster economic opportunities for future generations and commission academic bodies to analyse the social impact. Yet they skirt nervously around the big ugly question of control and ownership, particularly that exerted by the internet technology vendors. Jonathan Taplin argues that breaking up Google would lead to the same type of innovation explosion that accompanied the break-up of AT&T. Resorting to regulators always makes markets uneasy, but you know there is a problem when even free-market advocates like the Economist suggest regulation is required!

The Economist has rightly identified that it is data, not technology, that defines our current era, and that ceding control of all our data to a few vendors is a bad idea. Furthermore, the current regulatory model is not fit for purpose, as it has failed to keep up with the pace of technological change. The answer is simple: we must switch control back to the user and give individuals the rights, education and skills to make informed decisions about how and when they engage with technology, and with those providing products or services via the internet.

The Rational Problem
Perhaps where governments can effectively support this switch in control is to introduce regulation that changes the dynamics of the current internet-led economic model. The most radical answer would be a disbandment of existing intellectual property laws, which the likes of Guy Standing believe concentrate control in the hands of the few. Allowing a small number of companies to hold patents on crucial technologies enables them to defeat competition and maintain regular income flows. This is the key rational economic challenge to overcome. We have to ensure technology does not enhance disparity between the ‘haves and have nots’ but closes the gap.

At MaidSafe, we are sceptical that regulation alone can address the economic disparity question. One idea would be for an international governing body to oversee the internet and levy a tariff on internet companies, dividing the proceeds between countries to support the expansion of infrastructure and the improvement of technical skills. This is unrealistic: anyone observing the World Trade Organisation's attempts to secure agreement on universal trade knows how hard countries find it to set aside national interests.

Another more radical approach is demanding greater adherence from internet companies to the principles of open source and the open web; in particular rebalancing what is considered intellectual property (IP), in order to improve accessibility. It is one of the main reasons why MaidSafe has made the underlying SAFE Network code available under the GPL license and transferred ownership of the underlying and defensive IP to the MaidSafe Foundation, a Scottish charity focussed on fostering education and innovation. Both Jonathan Taplin and Guy Standing talk about the internet companies being the landlords taking rent from those using their IP. We are not suggesting all protection for innovators be removed, but there is an argument that economically we have become over-reliant on patents and should reduce that dependency.

Encouraging the open sourcing of more critical infrastructure technologies creates the potential for a more even playing field as a starting point for those who want access to the internet. Of course the big technology companies will say their business models fundamentally rely on revenue streams from existing products to fund the next generation of products, but they appear to have forgotten that a lot of today's products and services started out as publicly funded research projects. If commercial companies are going to secure a long-term revenue stream from rentable models then surely they must be encouraged to take a different approach to patents and IP.

More importantly, though, it would show willingness from industry to address the even bigger issue of inclusion: despite technologists heralding ever-growing numbers of people accessing the internet, there are still far too many cut off from its opportunities. Ultimately this is one issue that policy makers and governments have to address, but adopting a more open source approach can go some way towards enabling greater access.
Image: Slava Bowman

The Future is a Community-Led Movement
However, we believe the above options do not go far enough. Internet companies, particularly those obliged to report to Wall Street, will always struggle to balance commercial pressures against social good. That is why we have significant doubts about universal basic income, which the technology industry appears to be backing over-enthusiastically. On one level it appears arrogant, suggesting that 'poor' people should rely on a form of welfare system to make up for a lack of work. Perhaps we should all be grateful that the top 1% dole out handouts, but the vast majority of people we know would be offended if their families and future generations had to rely on UBI to get by. It lacks innovative thinking: yes, technology will take away jobs, but we also believe it will create new ones and new economic models. Frankly, UBI is not radical enough, born of traditional approaches to the welfare state.

Our proposal is the network becomes a source of income and economic opportunity based on contribution and participation. Fundamentally it becomes a reward system, where individuals and communities can contribute and feel a sense of accomplishment based on their level of participation. Above all this should be a bottom up approach, led today by communities of like-minded individuals. Network technologies and reward mechanisms are being developed to empower communities to take control of their identities and be more fairly rewarded. This will mean we are less reliant on the dominant internet companies and not waiting for government policy to catch up.

It still allows commercial companies to profit, but it also means users and content producers get to share the spoils. We should be offering users a reward in return for access to their data, and we should find innovative ways for users to monetise their computing resources. More and more households and communities will have sophisticated computing equipment whose spare capacity could provide revenue streams when individuals are not working. For example, at MaidSafe we are developing Safecoin, which provides a fair reward and payment mechanism for access to data. Combined with the SAFE Network's ability to identify the owner of each chunk of data, it will offer content producers (artists, bloggers and musicians alike) a better way to receive payment, as well as paying users for access to their spare computing capacity.

The Emotional Challenge
We believe incentivising participation is crucial in addressing the final and most divisive challenge: the ambiguity that the rise of technology has created for many people. Understandably it has led many to react instinctively and angrily to the control of the internet oligarchy. People are worried machines will lead to widespread redundancies and ultimately long-term unemployment, with no positive alternatives explained. The only way to address these concerns, which can become very emotive, is to create a community-led response. Working together, communities should be able to define opportunities, whether they are economic or social. The key is enablement and encouraging groups to work together, which again comes back to rewards and incentives. We already see a lot of this collaborative working in the SAFE Network Forum, which is moderated by members of the community and to which MaidSafe is only a contributor.

Using incentives and open source technology will make participation both accessible and beneficial. It will allow groups to work through challenges and create very local solutions. For example, imagine a community-led computing facility that generated income to support the group by offering capacity to the SAFE Network. That income could be shared among the group or used in exchange for products and services with other communities via the platform.

Clearly it is hard to envisage this reality, while the SAFE Network is still in development, but the growth of the SAFE Network Forum emphasises the value of a community-led approach. There is a role for government in supporting these communities, making people aware of them and educating them on ways to participate. This is a central element of the inclusion issue. If governments and education institutions can provide the training and support to help citizens to understand the opportunities this model offers it will empower communities to find their own answers.

However, we should not wait for policy makers to catch up. We have left it to the politicians for too long to come up with the answers and they have failed. We will have far greater influence over our relationship with technology and how it affects our lives if we build a movement that mobilises around our needs. The vision is not one huge amorphous online community, but many different ones focused around common interests and needs, benefiting from open access, being rewarded for participation and being allowed far greater control of our personal data.

One final note to add. While this may seem like a huge and almost unmanageable challenge this is no different to any other stage in history where the pace of technological change has forced a rethink of our approach to society and economics. Take this example:

“The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.”

This was written in 1890 by Samuel D. Warren and Louis D. Brandeis in the Harvard Law Review. Similar to today's technology, advances in photography in the late 19th century were seen as hugely disruptive to society. We survived that inflection point. We got some things right and some things wrong. I'm sure with a willingness to take some brave decisions and a community-led approach we will get through this next stage in our relationship with technology.

SAFE Network Development Summary – May 2017

We've had quite a few requests on social media and email these past few days asking for updates on development progress. These messages serve to remind us that not everyone has the time or the inclination to read the weekly development updates which we post each Thursday on the forum. So many projects, so little time! So the intention with this post is to provide a summary of the most recent events and our hopes and expectations moving forward.

Image: Richard Tilney Bassett

Roadmap
The best place to start is our development roadmap, which we updated and published late last week. This web page tries to encapsulate all the complexities of development over time on one page, so it's pretty high-level, but it is this snapshot view that most people seem to appreciate. You will notice that the roadmap outlines the major aspects of development and a rough indication of the order in which we anticipate tackling them.

You will also notice that we haven't included timescales. In the past we have provided timescales for 'launch' of the network. These have always been wrong, despite our best efforts. We have found it difficult to estimate timescales because so much of what we are working on is brand-new technology, sometimes completely bespoke, and at other times building on the work of other projects. Testing is also interesting: it really helps us understand more about how the network fits together and how it is used by our community, but it invariably leads to more tweaking and retesting, with previously unplanned and unknown rework and test durations.

We believe that publishing release dates with a high degree of uncertainty attached is not helpful to anyone and can cause more frustration than not publishing them at all. Network-related development is typically where the biggest black holes are, and as we move into incremental client-side development we anticipate timescales will become more predictable.

Stable decentralised network
In late March we released Test 15, a network that incorporated data-centre resources as well as enabling user-run vaults. Within this release, users were also able to run the SAFE Browser, Launcher and demo app, which continue to facilitate the storage of private and public data, as well as the creation of public IDs and the publishing of SAFE websites.

After three days of running a stable network without any lost data we realised we had reached an important milestone. While we had done this extensively in private tests, it was fantastic to see it running publicly and to see the community's reaction. Of course, life has a sense of humour, and shortly afterwards it became apparent that someone had written a script that created fake accounts and filled the relatively small network with data, stopping the creation of new accounts and the uploading of new data. This was really helpful to us, as it enabled us to find out what happens to the network when it reaches capacity in a real-world setting. The fact that it behaved as expected was reassuring, although we'd be lying if we didn't admit to finding the spam attack a little frustrating. This is of course something that the integration of safecoin would stop, as the requirement to 'pay' to store data will make the attack expensive, while the incentive of safecoin for farmers would lead to a significantly bigger network.

What now?
Looking forward, we are currently focussed on three main areas:

  • Catering for mobile devices.
  • Enabling greater user participation.
  • Improving the resilience and robustness of the network.

Mobile
The patience app developers have shown to this point is soon to be rewarded. The process of converting our APIs away from a REST paradigm to SDKs was essential to cater for mobile devices, as the requirement for REST APIs to maintain state would not have worked with mobile devices that disconnect and reconnect regularly. Users of the SAFE Network will gain access through the Authenticator, a secure gateway that protects user credentials from the application itself. The Authenticator is currently being bundled with the SAFE browser and will enable users to securely authenticate themselves onto the network, or enable them to browse publicly available data without logging in.

To implement the Authenticator, the team needed to add a new data type: mutable data. The new data type improves network efficiency, saves bandwidth, and provides the granular access control required by mobile platforms.
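To make the idea of granular access control concrete, here is a minimal sketch of a mutable-data entry with per-application permissions. This is illustrative only: the class and method names are ours, not the real SAFE API, and real permissions cover more actions (insert, update, delete, manage permissions).

```python
# Hypothetical sketch of a mutable-data container: the owner grants
# each app only the actions it needs, and updates change individual
# entries rather than rewriting the whole object (saving bandwidth).
class MutableData:
    def __init__(self, owner):
        self.owner = owner
        self.entries = {}       # key -> value
        self.permissions = {}   # app_id -> set of allowed actions

    def grant(self, app_id, actions):
        """Owner grants an app a set of actions, e.g. {"update"}."""
        self.permissions.setdefault(app_id, set()).update(actions)

    def update(self, app_id, key, value):
        """Apps may only mutate entries they were granted access to."""
        if app_id != self.owner and "update" not in self.permissions.get(app_id, set()):
            raise PermissionError(f"{app_id} may not update this data")
        self.entries[key] = value
```

An app that was never granted `update` raises `PermissionError`, which mirrors how the Authenticator keeps credentials and data access out of the application's direct reach.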

With mobile devices so ubiquitous throughout the world, enabling mobile client access to the network via mutable data has been receiving significant focus. From a resource-provision perspective, both alpha and beta versions of the network will require laptops, desktops and, in time, single-board computers to earn safecoin when it is released. Later, we will look at enabling mobile devices to farm for safecoin when plugged into a power outlet and in range of WiFi; however, as we detail below, this is not a priority for now.

More alphas
Some of the example applications that have been created are currently being ported to the new data type and the new APIs. The team are updating the documentation and testing the applications against a mock network, and they seem far more stable than previous iterations, which looks positive. We anticipate alpha 2 will encompass the new mutable data type and Authenticator, the SAFE Browser DOM APIs and Node.js SDK, along with example apps, tutorials and documentation.

Image: Clint Adair

Alpha 3 will see our focus shift to enabling a greater number of users to run vaults from home by integrating uTP. Presently users must set up TCP port forwarding, or enable UPnP on their routers, which requires a little configuration in some cases. Adding uTP support will make the process more seamless for many, while making the network accessible to more users. uTP is used in some BitTorrent implementations and, when implemented effectively, helps to mitigate poor latency and facilitates the reliable, ordered delivery of data packets.

During this phase we will also integrate node ageing, a feature that makes the network more resilient to consensus group attacks. The team will also implement the first part of data chains, a long-planned feature that should ultimately enable the secure republishing of data should the network ever lose power, and provide validation that data has been stored on the network.
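The validation idea behind data chains can be sketched very simply: each link commits to its predecessor, so after a restart the network can check that a stored record really followed from what came before. The real design uses group signatures from network sections; this hedged sketch uses a bare hash link to show only the chaining principle.

```python
import hashlib
import json

# Conceptual sketch of a "data chain": each link stores the hash of the
# previous link plus its own payload hash, so tampering anywhere breaks
# verification. (Real data chains are secured by group signatures, not
# just hashes -- this is an illustration, not the actual scheme.)
def make_link(prev_hash, payload):
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

def verify_chain(chain):
    """Recompute every hash and check each link points at its parent."""
    for i, link in enumerate(chain):
        body = json.dumps({"prev": link["prev"], "payload": link["payload"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != link["hash"]:
            return False
        if i > 0 and link["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

After a full network restart, replaying and verifying such a chain is what would let nodes prove that data they republish was genuinely stored before the outage.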

Looking ahead
Beyond alpha 3 we will focus on:

  • Data chains, part 2.
  • Data republishing and network restarts.
  • A security audit of the network.
  • Test safecoin.
  • Real-time network upgrades.
  • Network-validated upgrades.

As has been the case to this point, we will continue to release multiple test networks regularly between each alpha network to prove the technology in a public setting and to guard against code regressions.

We continue to be grateful for the huge support of the people who take the time to run these networks and report back – you all know who you are!

Developer Case Study – Dsensor

Decentralized Mapping Protocol Project – Dsensor

Continuing our series of case studies highlighting early stage application development on the SAFE (Secure Access For Everyone) Network, Dsensor is being developed by James Littlejohn. James explored various platforms to store and protect the data he would be collecting and decided to use the SAFE Network, because it reflected his belief that the network should not be driven by economics, but be focused first and foremost on the data.

MaidSafe's fully secure, decentralised approach supported James' view that knowledge and data should be in the complete control of the user. While it is early days, Dsensor's use of the SAFE Network APIs in its proof-of-concept form shows its potential as a viable platform for the management of data. James was also attracted to the SAFE Network because of its strong encryption and its ability to break data into chunks before scattering them around the decentralised network of nodes. Combined with the decentralised architecture, which avoids offering hackers central points of attack, this ensures the highest possible security and privacy for users – in contrast to today's centralised, server-based model.
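The chunking idea mentioned above can be sketched in a few lines: data is split into fixed-size pieces, each piece is addressed by its content hash, and the pieces are scattered across nodes keyed by that address. This is a simplification under stated assumptions: the real network also self-encrypts each chunk before storing it, a step omitted here.

```python
import hashlib

CHUNK_SIZE = 4  # tiny size for illustration; real chunks are ~1 MB

def chunk(data, size=CHUNK_SIZE):
    """Split data into chunks and address each by its SHA-256 hash."""
    pieces = [data[i:i + size] for i in range(0, len(data), size)]
    addresses = [hashlib.sha256(p).hexdigest() for p in pieces]
    store = dict(zip(addresses, pieces))  # address -> chunk, as nodes hold them
    return addresses, store

def reassemble(addresses, store):
    """Fetch chunks by address, in order, to rebuild the original data."""
    return b"".join(store[a] for a in addresses)
```

Because a chunk's address is its content hash, any node (or attacker) holding one chunk learns nothing about the rest of the file or who owns it; only the holder of the ordered address list can reassemble the data.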

Being open source and supported by a strong community in the SAFE Network forum also means James has ready access to experts and potential partners, who can help to build out the application and trouble-shoot any technical questions. In the future James may also explore using safecoin to incentivise participation on Dsensor.

The Problem with Science

James Littlejohn has been involved in entrepreneurial projects since the dot com boom and while investigating opportunities around text mining identified an opportunity for lifestyle linking analytics, particularly in the area of wearable tech. In the course of his evaluation he recognised a broader application to data mining and analysis in the field of scientific and academic research. James assessed a number of drivers, including emerging technologies and changing economic conditions, which were beginning to have an effect on the way research was conducted.

Firstly, walled garden applications such as Facebook and wearable technologies were becoming more prevalent, and while they were a rich source of data on human activity, access to that information was restricted. At a time when the internet is supposed to be democratising many aspects of work and social life this is endangering an important source of information on lifestyle and health patterns, which could benefit societies around the world.

Secondly, the sustained economic impact of the financial crisis was creating significant pressure on public funding for research at a time when it was needed more than ever. Technology and the availability of large amounts of data are leading to opportunities for breakthroughs in a wide variety of academic and research fields. If the funding is not available via traditional public sources then there is an urgent need to find new forms of investment. The rise of alternative cryptocurrencies could potentially address this point, offering a new, fairer way to incentivise and reward individuals for participating in research projects. For example, James envisages a scenario where the grant funder might 'tokenise' a percentage of their funding money and issue it via a science blockchain (like Dsensor). This would help to ensure the funding could be traced directly, supporting good governance of scientific research projects and fairer access to resources.

The final driver for a new model reflects an on-going debate about the model of peer-reviewed scientific research. For a number of years there has been a recognition of some fundamental weaknesses in the model in areas such as the replicability of research. In a poll conducted by Nature in May 2016 more than 70% of researchers admitted they had tried and failed to reproduce the experiments of other scientists and more than 50% failed to reproduce their own experiments. Of course this is in part due to the nature of frontier scientific research, which is reliant on trial and error, but there are clearly inefficiencies in the process.

Furthermore, there are questions about the efficiency of current research models – in 2009 Chalmers and Glasziou identified some key sources of avoidable waste in biomedical research. They estimated that the cumulative effect was that about 85% of research investment – equating to about $200 billion in 2010 – is wasted. A blockchain provides a potential solution to this reproducibility crisis, as Dr. Sönke Bartling and Benedikt Fecher outline in their paper, "Blockchain for science and knowledge creation." Although scientific research should be delivered at arm's length from the individual contributors, it is ultimately reliant on individual scientists to gather and interpret data without bias. It is also often reliant on finite data sets, controlled samples or clinical trials, limiting the ability to cross-reference the findings against other data sources.

Given the availability of data via the internet and the rise of automation technologies such as machine learning, James believes that if individuals have control of their information they can decide to contribute it to research projects without the interference of third parties such as academics or technology providers. Using automation, scientists, academics – and more importantly citizen scientists – can draw data from anywhere in the world, beyond the confines of a specific controlled sample, and review it independently to provide a data-driven outcome.

Building A Blockchain for Science Research – A Truth Engine for Mankind

James’ investigation of text mining approaches led him to peer to peer models, which were enabling the owners of data to take control of how and with whom their information was shared.  

It led to the development of Dsensor.org (Decentralized Mapping Protocol), a peer-to-peer network in which science knowledge can be investigated, discovered and shared. It is based on the principle of science "SenseMaking" and is designed to evolve peer review into a computational consensus model. Using Dsensor, a scientist who wants to test a hypothesis enters it in computational form (called a Dmap in Dsensor speak). The Mapping Protocol then automates the testing of the science, starting by trawling the Dsensor network for relevant data from other peers. That data is then sampled and 'scored' on its power to predict, verify or challenge the hypothesis, until a computational consensus is established. Science attaining this status becomes 'computationally active' in the network, meaning any peer can tap into the collective knowledge, feed in their own sensor data and get the insights from the science working for them.
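The sampling-and-scoring loop described above might look something like the following hedged sketch. All names here (`evaluate_dmap`, the threshold, the stopping rule) are our own illustration of the idea, not Dsensor's actual protocol.

```python
import random

# Illustrative sketch of a Dmap evaluation loop: repeatedly sample peer
# data at random, score how often the hypothesis holds, and declare the
# science "computationally active" once the score stays above a
# consensus threshold. Purely conceptual -- not Dsensor's real code.
def evaluate_dmap(hypothesis, peer_data, threshold=0.9, rounds=100, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible demo
    hits = 0
    for n in range(1, rounds + 1):
        sample = rng.choice(peer_data)   # random sampling of peer data
        if hypothesis(sample):
            hits += 1
        # Require a minimum number of samples before declaring consensus.
        if n >= 10 and hits / n >= threshold:
            return "computationally active", hits / n
    return "unresolved", hits / rounds
```

The random selection of which peers and which data points get sampled is what, in Dsensor's design, makes colluding to push a fraudulent result through expensive: an attacker cannot predict which observations will be drawn into the consensus.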

James has the ambitious goal for Dsensor to become a "truth engine for mankind", ensuring science is based on openness, transparency and reproducible results, essentially checking the accuracy of peer review. Dsensor intends to deliver this by building a network of trustless peers whose inherent complexity makes it economically and technically very costly to corrupt. Currently at proof-of-concept stage, Dsensor utilises the Ethereum blockchain, using its cryptography and a variant of the proof-of-work process to create a technological and mathematical state where, even with colluding computers, it is impractical to game the system. Rather than creating complexity through proof of work, Dsensor creates uncertainty through random sampling: in particular, the selection of peers from the network and the data sampling techniques. The data sampling techniques are selected by each autonomous peer, and network sampling is a property of the protocol for selecting peers from a Distributed Hash Table. In theory, once the network reaches a certain size, the economic cost of gaming it with false sensor data to support a fraudulent scientific computation becomes extremely high.

Additional safeguards include the role of reproducibility in science. Reproduced results create an immutable audit trail of "mapping accounting" entries that support the most truthful science available to the network. These records are called GaiaBlocks and are open to challenge by any peer on the network. Scoring also provides a rating model for scientific research and for individual peers; peers with poor outcomes will be demoted in favour of more highly rated scientific computations.


Developer Case Study: Project Decorum

During the course of 2016, MaidSafe have been privy to a number of projects that are building on top of the SAFE Network. One such project is Decorum.

What is it?

Project Decorum is a research-led project run by Harmen Klink, a computer science undergraduate at the HU University of Applied Sciences Utrecht in the Netherlands. He wants to build a social media platform that gives users greater control of their data, and therefore enhanced privacy – in contrast to today's model, which is centralised around a few service providers.

Project Decorum is currently a proof of concept, which Harmen designed in order to run a successful crowdsale that raised over €400,000. He aims to use this investment to develop the application further, aspiring to create a hybrid of the best features of existing major applications such as Facebook, Reddit and Twitter.

How does it work?

The core protocol of Project Decorum is a substitute for the missing central coordinator, because the SAFE Network has been designed on the principle of a “serverless” architecture.  It consists of a set of rules that describe where and how conversational data should be uploaded to the SAFE Network. These rules predict where the replies to a particular message on the SAFE Network might end up, no matter where the original is located. This means that all applications and SAFE websites that use this protocol will be compatible with each other, making communication simpler.

On the data level all information is visible, and the protocol organises conversations in a tree structure where every node of the tree represents a message from a user. Replies to earlier messages create new branches. This tree structure lends itself well to the "threaded" format used by many well-known forums and comment plugins. User interfaces decide which data to display, and users can create a new root to start a tree for a new conversation. This can be used to create a forum, a comment section on a blog, a group chatbox, and so on.
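The rule that "replies can be predicted from the original message's location" can be sketched as a deterministic address derivation: any client, given a parent message's address, can compute where that message's replies should live without asking a server. The derivation below is our own illustration, not Decorum's actual scheme.

```python
import hashlib

# Hypothetical sketch: a reply's storage address is derived from its
# parent's address, the author, and a per-author reply index. Every
# client computes the same addresses, so no central coordinator is
# needed to find the replies that form the conversation tree.
def reply_address(parent_address, author, index):
    seed = f"{parent_address}/{author}/{index}".encode()
    return hashlib.sha256(seed).hexdigest()
```

To read a thread, a client walks the tree: compute the candidate reply addresses for the root, fetch whatever exists at those locations, then repeat for each reply found, building the branches as it goes.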

In Project Decorum users will own their data, and everyone is their own moderator through the use of personal ignore lists. Particular posts or users can be put on such a list, and it is also possible to subscribe to ignore lists run by other people. This allows dedicated and widely accepted moderators to rise naturally within their communities: active people with sound judgement will attract subscribers, can collaborate to form moderator teams, and may accept donations or even charge for their moderation services. Multiple teams with different rules can be active in the same community if there is demand.
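The ignore-list model is simple enough to sketch directly: a post is hidden if its author appears on the user's own list or on any subscribed list. Function and field names here are illustrative, not Decorum's API.

```python
# Minimal sketch of personal plus subscribed ignore lists: the union of
# all lists the user trusts forms the set of blocked authors, and
# filtering happens client-side -- nothing is deleted from the network.
def visible_posts(posts, own_ignores, subscribed_lists):
    blocked = set(own_ignores)
    for moderator_list in subscribed_lists:
        blocked |= set(moderator_list)
    return [p for p in posts if p["author"] not in blocked]
```

Because moderation is just a filter each user applies locally, two people in the same community can subscribe to different moderator teams and see different views of the same underlying conversation tree.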

Why is Project Decorum working with MaidSafe?

Harmen chose the SAFE Network for his project for several reasons.  He believes the privacy and security of the platform should be a prerequisite for any Internet application.  Furthermore, the decentralised model offers great scalability, and he has found it hard to overload the system.  Additionally, SAFEcoin is a great feature because of the way it is integrated into the network and the instant rewards it offers.  This will help sustain engagement with the platform, as social payments are a feature increasingly expected by users.  It also offers developers the flexibility to tokenise other assets, creating crypto-currencies that can represent all kinds of value.  

What’s next for Project Decorum?

The next steps for Project Decorum include working on the designs to make them more tangible and figuring out the business model.  As APIs for the SAFE Network become available and more stable, Harmen will continue development on the protocol.  MaidSafe hope that features such as the automatic reward mechanism for participants will enable Harmen to further develop the usage model for Project Decorum.

Harmen Klink, Founder, Project Decorum

“I believe having access to multiple identities is an important benefit of the SAFE Network, because it reflects the varied identities and roles we play in our personal and work lives. The network of identities forms a web of trust that can be used to distinguish legitimate users from abusive bots. When a real name is coupled to an identity, the strength of the web of trust is also used to show others the likelihood that those two truly belong together. This protects users from becoming victims of impersonation and identity theft.”

The SAFE Network Release Cycle

As you may have gathered from the even greater amount of activity on GitHub (I didn’t think it was possible either), the core of the SAFE Network has been getting tested both internally and externally as we get ever closer to a stable decentralised network.  While details about the team’s development progress continue to flow via the forum, the purpose of this post is to establish the main phases of the impending and iterative release process. These are:

  • Alpha – the core network and fundamental functionality are provided via an initial implementation; this is the first public testing phase.
  • Beta – alpha implementations are iterated on and improved based on user feedback.
  • Release candidate – the fundamental feature set is stabilised to provide greater security, resilience and efficiency.
  • Release – privacy, security, freedom!

The speed at which MaidSafe will iterate through the alpha testing phase is unknown and will be dependent upon how well the network performs at this stage. However, it is anticipated that having the core network in place will make it significantly easier to test the addition of new features than ever before. Testing against mock networks is only useful up to a point!

There will be several alpha releases, which will commence in simple numerical order, each denoting an incremental improvement on the previous version. For example, as per the roadmap, alpha 1 will come with: public ID management, public and private data storage, vault configuration and desktop installers (64 bit Windows, Mac and Linux). The second alpha iteration will include additional features and will be called alpha 2, and so on.

SAFE Network Fundamentals

The fundamental features, beyond the alpha 1 release, have been defined as:

  • Contact management
  • Messaging
  • Test safecoin
  • Vaults running on ARM architectures
  • Launcher UI
  • Safecoin wallet

The alpha release will gradually implement this functionality in an iterative cycle and provide the features highlighted above. However, this will be the first iteration of these features and development on them will continue until the engineering team are satisfied that the implementation provides the desired functionality. At this point, the network will transition to beta. When in beta, these features will become more elegant, efficient and secure. The release candidate will see the features frozen and further stabilised prior to full release at which point safecoin will go live.

In tandem with this release cycle, both users and developers can expect the ongoing release of APIs that reflect access to ever increasing network functionality, as well as example applications that showcase features of the network to end users and also act as tutorials to developers.

Out of Beta and Moving Forward

Beyond release, MaidSafe, either alone or working in partnership with other developers, will start to create some of the features below, which will offer both developers and end users access to some exciting new tools, such as:

  • Pay the producer
  • Smart contracts
  • Autonomous updates
  • Computation handling

We will provide you with more details on each release as it approaches and hopefully this post has been useful in providing more detail around our planned release cycle.

MaidSafe Development Update

Hello, my name is Ross and I have been part of the MaidSafe team for over 2 years. Until recently, this was within QA; my job has now transitioned into a more customer support focused role and is still evolving as we grow as a company. SAFE forum regulars will know me best for collating and sharing the team’s weekly development updates. I am aware not everyone has the time or the inclination to regularly check the community forum, so I thought it would be useful to provide a less technical overview of our development progress during the last few months, here in the blog.

All of our energy and effort in the last few months has been focused upon delivering a Minimum Viable Product (MVP) as quickly as possible. What will the MVP look like and what will you be able to do with it? The MVP will enable users to install the software and connect to the network from their computer, store and retrieve files, browse sites hosted on the SAFE Network and message other users. Subsequent development sprints will see the addition of other extremely important features, such as Safecoin. We are very close to delivering the MVP and, depending on which core developer you speak to, this can be measured in either days or weeks. We are aware that we have been ‘almost there’ for a while now, so let me update you specifically on our progress within our core libraries; hopefully this will allow you to come to an informed conclusion yourself.

A huge amount of effort and resource has gone into the core Routing and Crust libraries over the last couple of months. 75% of our engineering capacity has been focused on collating feedback from testing, pair programming to address unanticipated behaviour, and delivering a stable, functioning network that behaves as expected. At the same time, both of these libraries have been heavily refactored; refactoring means restructuring and reducing complexity within the code base without changing its overall logic or behaviour. Crust itself has been broken down into smaller modules, called crates:

  • rust-utp – a crate that enables you to connect to the network from wherever you are.
  • service_discovery – discover other instances of your application on the local network.
  • config_file_handler – create, read and write configuration files.
  • crust (slimmed down and safer) – reliable peer-to-peer network connections; one of the most needed libraries for any server-less, decentralised project.

The guys have also been working on simplifying Crust’s API (application programming interface) to make it more user-friendly and generally easier to integrate with. Routing was stabilised to allow the dependent crates (all the client modules) to utilise its functionality and interact with it more easily. Both libraries were also thoroughly documented during this period, which is essential for third-party developers wanting to build upon the SAFE Network.

Meanwhile, the Client guys have been working in parallel to prepare the upper layers for the release of a stable base: focusing on the Launcher, creating solutions that allow users to bridge seamlessly between the old internet and the new SAFE Network, and beginning work on real end-user SAFE apps. Firstly, what is Launcher? Launcher is responsible for starting any SAFE Network compatible application, and it acts as a gateway between the application and the SAFE Network. Applications authorise and exchange data with the SAFE Network only when the user allows them to, so in practice you share your valuable credentials with Launcher alone, not with every application you use, making this a far easier and massively more secure experience than we have on the current internet. It also means you need remember only one password in order to access all your Launcher-enabled apps. This approach lowers the barrier to entry for third-party developers as well and will encourage further innovation on the SAFE platform.
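The credential-gateway idea can be illustrated with a toy sketch. The `Launcher` class and its methods below are hypothetical names for illustration only, not the real SAFE API: the key point is that each app receives a revocable session token from the Launcher and never sees the user's credentials.

```python
# Hypothetical sketch of the Launcher pattern: credentials live only
# in the Launcher; each approved app gets a session token instead.

import secrets

class Launcher:
    def __init__(self, password):
        self._password = password    # known only to the Launcher
        self._sessions = {}          # token -> app name

    def authorise(self, app_name, user_approved):
        """Issue a token to an app only if the user explicitly approves."""
        if not user_approved:
            raise PermissionError(f"user denied access for {app_name}")
        token = secrets.token_hex(16)
        self._sessions[token] = app_name
        return token                 # the app never sees the password

    def request(self, token, action):
        """Perform a network action on behalf of a token-holding app."""
        if token not in self._sessions:
            raise PermissionError("unknown or revoked token")
        return f"{self._sessions[token]} performed: {action}"

launcher = Launcher(password="correct horse battery staple")
token = launcher.authorise("safe-mail", user_approved=True)
print(launcher.request(token, "fetch inbox"))
```

Because every app call is mediated by the gateway, revoking a single token cuts off one misbehaving app without affecting any other, and the password itself is entered exactly once.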


Figure 1 – Example screen of SAFE Launcher on Windows

The team have also hatched a solution that allows you to browse sites on the SAFE Network without needing a browser plugin, just using the SAFE Launcher and your normal internet browser. We believe this will help enormously in terms of encouraging people to first try the network; this has been a topic that has seen a lot of community discussion. The discussion via the forums or the Request For Comment (RFC) process is input we all value very much and something we hope to see much more of as the network gains traction.

A lot of work has gone into creating, finalising and documenting the Launcher API. This API is crucial to the success of the platform in that it is the gateway allowing developers to integrate their products and services to leverage the power of the SAFE Network.

An example of one of the SAFE apps on which the UX and Client Devs are working together is a Messaging app. Below are some mock-ups of what we expect the app to look like.


Figure 2 – Example screen of SAFE Messaging app
Figure 3 – Example screen of SAFE Messaging app

When you have been working with CLI (Command Line Interface) examples for so long, the knowledge that well thought out, aesthetically pleasing and functional applications are coming is extremely exciting. Another application currently under development is a Drive VFS (Virtual File System) app; this is essentially a file storage application which will allow users to visualise and manage their files stored across the SAFE Network.

In the next few weeks the MVP should be released and publicly available for testing. We should begin to see our internally developed and third-party applications become tangible, and planning will begin for a sprint to implement Safecoin into the network. From a personal perspective, I shall endeavour to make these less technical blog updates a regular occurrence and, as always, your feedback and comments are very welcome.

Until next time…

MaidSafe Development Update

It has been an exceedingly busy summer at MaidSafe and it seems like a good point to recap what we have been working on and how we see things rolling out as we move forward. Since our last major update, when we announced the network running end-to-end, we have been adding some pretty significant features and making several enhancements.

Amongst the highlights, we implemented Unified Structured Data, a change that means the network need only recognise two primary data types: immutable data and structured data. The repercussions of this enhancement are significant. Not only does it reduce network traffic (lowering load and making the network more efficient), it also removes much complexity and enhances the security of the network. It is anticipated that Unified Structured Data will lay the groundwork for features such as smart contracts and global computation. For those looking for more technical detail, you can visit the proposal as it was implemented here.
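As a rough illustration of the two-type model (the classes below are hypothetical sketches, not the network's actual implementation): immutable data is named by the hash of its own content, so it can never change without changing its address, while structured data keeps a fixed name plus a version counter that its owner can update in place.

```python
# Hypothetical sketch of the two unified data types.

import hashlib

class ImmutableData:
    def __init__(self, content: bytes):
        self.content = content
        # The name IS the content hash: tamper-evident by construction,
        # since altering the content would change the address.
        self.name = hashlib.sha3_256(content).hexdigest()

class StructuredData:
    def __init__(self, name: str, payload: bytes):
        self.name = name          # fixed address on the network
        self.payload = payload
        self.version = 0

    def update(self, payload: bytes):
        # The owner can replace the payload; the version counter lets
        # replicas agree on which copy is newest.
        self.payload = payload
        self.version += 1

chunk = ImmutableData(b"hello safe network")
record = StructuredData("profile-of-alice", b"v1")
record.update(b"v2")
print(chunk.name[:16], record.version)
```

Collapsing everything to these two primitives is what simplifies the network: immutable chunks need no coordination at all, and mutation logic is confined to the structured type.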

We also recently completed work on the Decentralised Naming System (DNS), essentially the SAFE Network’s version of Internet domains. Keen to avoid many of the issues we experience with the existing system, such as central control by a single entity, the SAFE Network provides a way to look up data related to any name. So, no more ‘http’ and a lot more ‘safe:’. At the end of August we released an example application showing this functionality, which can be downloaded here.
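To make the idea concrete, here is a toy sketch of name registration and lookup. All class and method names are hypothetical, and the real DNS records live on the decentralised network rather than in a local dictionary; the point is simply that a public name maps to services, with no central registrar deciding ownership:

```python
# Hypothetical sketch of decentralised name lookup: first to claim a
# free public name holds it; each name maps services to data targets.

class Dns:
    def __init__(self):
        self._names = {}   # public name -> {service -> target}

    def register(self, name, owner):
        if name in self._names:
            raise ValueError(f"'{name}' is already taken")
        self._names[name] = {"_owner": owner}

    def add_service(self, name, service, target):
        self._names[name][service] = target

    def lookup(self, url):
        # e.g. "safe://www.maidsafe" -> service "www", name "maidsafe"
        service, name = url.split("://", 1)[1].split(".", 1)
        return self._names[name][service]

dns = Dns()
dns.register("maidsafe", owner="maidsafe-public-key")
dns.add_service("maidsafe", "www", "address-of-site-files")
print(dns.lookup("safe://www.maidsafe"))
```

On the real network the name record would be owned by a keypair and resolved by the network itself, so no authority can seize or reassign a name.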


This was a really exciting development as it enabled us to get more software (in addition to the Crust, Self Encryption and Routing examples released in early summer) into the hands of users. In essence, the DNS example enabled users to set up a local network, add files to it (even a website) and then view them using a Firefox extension across multiple platforms. MaidSafe focus a lot on usability, so it is great to see comments like this (from Justin Chellis):

“I am not great at things like this, but it worked very easy for me..”

Receiving positive feedback is a real motivator for all at MaidSafe; it’s proof that we are making very clear progress toward our goal of giving the average man, woman and child access to technology that keeps their digital information safe.

In addition to adding features, we have also had to spend some time going back and tidying up sections of code that were inevitably less than perfect given the pace at which we are working. A technical debt sprint has just finished and we are pleased to see much increased stability in the network. This effort is all being helped by our recently launched bounty program which enables MaidSafe to benefit from the work of several community developers, who between them submitted 12 pull requests over a 2 week period. It’s great to be able to harness the passion of external developers in a way that is mutually beneficial and we are really looking forward to seeing the bounty program flourish.

In the immediate future we are planning the next round of development, and there are some big-ticket features on the horizon. These include:

uTP Hole Punching – This will enable nodes (clients and vaults) to talk to each other across NAT boundaries and allow users to join and become part of the network by contributing their computing resources. Crowd-sourced infrastructure has arrived!

Messaging Infrastructure – This will be a really fun deliverable, but also a big one. The messaging APIs for inter-node communication will be put in place, potentially along with the MaidSafe Public Identity in this iteration. This will allow chat engines and clients to work on the SAFE Network.

Launcher Implementation – It is anticipated that Launcher will be the only application that users give their credentials (PIN, Keyword and Password) to. Launcher will then authorise apps on behalf of the user, giving each of them a sandboxed environment to work in. This not only prevents apps from knowing the user's credentials, but also removes their ability to tinker with the folders and data of other apps or with the user's own files and folders.

So, we think that 2 more development sprints will see us reach Dev Bundle 2, a network that anyone can join remotely, store and retrieve data and farm for test safecoin. At this point, it will also be possible for third party developers to start building apps for the network. As we continue the roll out of the network, we will be replacing the existing MaidSafe website with two new sites, one that is focussed toward end users and farmers, and the other to provide developers with a clear channel into using the network.

All in all, very exciting times ahead and we’re very grateful to have you on this journey with us.