Steps toward the new resource economy

(A report from the first Holochain intensive / unofficial ValueFlows festival.)

Last weekend the first ever Holochain intensive was held on the Sunshine Coast in Australia. You'd be forgiven for wondering why that location was chosen - we don't usually get much attention from the tech industry down here compared to places like California - but for whatever reason there's a significant number of Aussies interested and involved in the Holochain ecosystem. Basically it happened here because enough of us came together to make it happen.

I'm dubbing it the "unofficial ValueFlows festival" because REA accounting was particularly dominant over the weekend, front and center in a lot of sessions and the thing that most people there seemed to be focused on. As the resident "expert" I wasn't really prepared for how many of the presentations and workshops I would be facilitating or how many conversations I would be having about it. This is largely due to Holochain's founder Art having been turned on to REA by Bob and Lynn of Mikorizal and championing those concepts to groups interested in building on the platform.

Here he is walking a group of us through ValueFlows:

Art explaining ValueFlows

All in all, it was pretty. fucking. awesome.

Also exhausting! :joy:

Probably the most exciting aspect for me was meeting all the teams interested in resource accounting and experiencing their deep desire to share efforts, avoid duplication and combine forces on building out the core systems everyone will need. They were all enthusiastic to basically jump in and work on modules together without getting tied up in "who pays who" sorts of discussions. All the project leaders seem to see it as sensible and reasonable for their teams to be contributing to public domain projects outside of their own and to fund that work internally. In 15 years in the tech industry I have never seen such strong desire for open collaboration. Time will tell how things eventuate, but it's a super encouraging starting point.

Hopefully I'm not forgetting anyone, but these are the projects and case studies from the conference that stood out as being committed to co-developing or consuming the ValueFlows system components we are building in Holo-REA:

  • Metavents are building an event management and disaster recovery platform (the two areas are basically attacking the same problems). It's one of the more ambitious projects in the works, but they have made a lot of progress. They're also one of the more enthusiastic collaborators, which is fantastic as their system touches on every part of ValueFlows: from the observations layer, up through planning, to creating and improving blueprints for running events successfully.
  • The community management platform Hylo have intimate connections with Holochain, and plan to assist in maturing the Holochain UI project. The goal is for these efforts to operate like the Open App Ecosystem and contribute to a library of easily remixable UI components. They're also planning to have some involvement with supporting development of the underlying ValueFlows system, which will play a role in upcoming resource sharing features. I plan on doing a couple of days of hacking with them this week to get a better sense of how we can support each other.
  • BuildSort want to disrupt and open up the construction industry supply chain, making the process of planning and building more streamlined and less risky. Their interests largely centre on project management and business process modelling, but there's certainly a lot of overlap with Metavents' requirements.
  • Redgrid are building distributed energy trading systems, using a mutual credit approach to intelligently distribute energy across microgrids. They want to use ValueFlows to manage the accounting of these energy flows in an open and pluggable manner.
  • Humm.earth are building a distributed publishing platform, and have aspirations to create value out of storytelling. Figuring out what exactly that means in terms of REA is going to be one of the more interesting challenges ahead.
  • bHive Bendigo are already well established, but they aim to move their current client/server system over to Holochain as soon as possible. Their village resource sharing platforms are obvious applications for REA-based economies; indeed, little extension may be necessary to service their needs.

This list doesn't include other existing collaborators such as Producer's Market, who have been constant sources of support and inspiration throughout our journey.

But there's a dark side to all this unbridled enthusiasm and giving. IP ownership is still a complex and murky problem which needs resolving. I used to believe that deploying something publicly under an MIT license was sufficient to keep it out of corporate enclosure, but after hearing the horror stories about Arduino and Mozilla at the Scuttlebutt camp earlier in the year I now realise the legal system is as nightmarish and prone to failure in this area as any other.

Luckily CommonsEngine (Holochain's incubator), the humm.earth team, Producer's Market and many others are working on legal entities and licensing to prevent these problems from arising somewhere down the track; and I look forward to having a stable legal entity we can file all this public domain work under to keep it safe. Still, it's not going to hold me back. This is a solid group of well-meaning people to collaborate with. The planet's on fire... best get on with the work.

Something pretty neat came out of the weekend which should assist other developers and business analysts in understanding ValueFlows and REA accounting. I found myself struggling at times to convey the details of the data model, so the ValueFlows GraphQL spec I have been working on now has a really nice schema visualisation tool bundled with the repo. This was pretty invaluable in exploring the concepts and explaining various particulars to interested parties. There's also a mock API and GraphQL query editor. If you are familiar with nodejs, it's pretty trivial to set these tools up for yourself.
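
To give a flavour of what that looks like, here's roughly the kind of query you can paste into the bundled query editor and run against the mock API. This is illustrative only: the exact query and field names in the spec will differ, but the shape is the same.

```graphql
# Illustrative only: names here are approximations, not the exact vf-graphql schema.
# Fetch observed economic events along with the resources and quantities they affected.
query {
  economicEvents {
    action
    provider
    receiver
    affects { trackingIdentifier }
    affectedQuantity { numericValue unit }
  }
}
```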

I also got to meet Willem Olding and David Meister from the Holochain team and explore some of their plans for improving the Holochain developer kits. A start has been made on integrating GraphQL, and those features should find their way into the core soon. We'll be working together to ensure the result is as simple as possible while still retaining sufficient power to extend as needed. Others appear to be working on ways to make the Rust GraphQL experience as easy as possible, so the future looks bright.

Next steps for the Holo-REA project

This leads me to a pretty coherent understanding of how everything will fit together and where things will progress from here. Of course, it will all change and evolve over time and none of this is final; indeed, the same can be said of Holochain itself. So with that in mind, I'd like to share my thoughts and plans at this point in the journey.

Now that David and I (well, mostly David) have finished tinkering with our prototype app, it's time to start from scratch on the new version of Holochain. That means learning Rust, which is the first foray into ML-derived languages for both of us, so it's probably going to be a slow road for a while. All the better that we have the assistance of others in the community to help us learn and grow.

I have a pretty good idea of what collaboration with other teams looks like technically: keeping GitHub as our central point of contact, having other teams involved as collaborators in our repositories, and tagging me in on issues in their projects when they need integration help. This, combined with a sensible branching workflow, should keep things flowing smoothly. Rust's package manager, Cargo, plays nicely with git dependencies, so setting up cross-project dependencies should be straightforward.

As to the code itself, what I've come away with after the weekend is the conclusion that Holochain 'zomes' are the appropriate modular building block of functionality to implement our system in. Holochain apps (or 'hApps') run in separate, isolated DHTs, which can be glued together explicitly by users who want to allow connectivity between different apps on their behalf. But these boundaries come with limitations: namely, that data from one DHT can't (yet) use data from other DHTs when validating new or updated entries. Even if this does become possible, it will likely be awkward and create extra technical complexity, such as the need for manual deserialisation and link validation.

Each DHT is composed of one or more of these zomes, which are just partitioned system entrypoints describing discrete bits of functionality. The nice part about all of this is that if it does become sensible to split these zomes up into separate DHTs, the changes should be limited to validation logic. Still, this is unlikely: given the highly interconnected nature of the REA data model, it's pretty certain that projects will want 1:1 relationships between most layers of our system.

So, long story short: if we build for zomes then it should be easier to iterate without forgoing any flexibility down the track. What zomes / areas of functionality are we planning? I'm glad you asked...

Observations module

This will implement the base layer of ValueFlows, and basically includes the pieces for recording events that have occurred in the real world and tracking resources. For apps which only need to provide supply chain tracking / provenance features, this will probably be the only zome they need to include.

This module also includes some configuration options: namely, the types of available actions that events can indicate (e.g. "consume", "produce", "give", "receive"), and specifications for the types of resources each app will manage (what units they are measured in, and whether they may be substituted for one another).
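
To make that a bit more concrete, here's a rough sketch of the kinds of records this zome deals in, written in GraphQL SDL since that's the notation of the reference spec. Type and field names are simplified approximations of the ValueFlows vocabulary, not the exact definitions we'll ship.

```graphql
# Simplified, illustrative sketch -- not the exact ValueFlows / vf-graphql definitions.

# A measured amount of something, e.g. "5 kilograms".
type QuantityValue {
  numericValue: Float!
  unit: String
}

# Per-app configuration: the kinds of resources this network deals in,
# how they are measured, and whether one instance can substitute for another.
type ResourceSpecification {
  name: String!
  defaultUnit: String
  substitutable: Boolean
}

# A resource being tracked through the economy.
type EconomicResource {
  trackingIdentifier: String
  conformsTo: ResourceSpecification
  currentQuantity: QuantityValue
}

# Something that actually happened in the real world: a consumption, production,
# transfer and so on. This is the base record everything else hangs off.
type EconomicEvent {
  action: String!         # one of the configured action types, e.g. "consume", "produce"
  provider: ID            # the agent giving/performing (agents are sketched further below)
  receiver: ID            # the agent receiving/benefiting
  affects: EconomicResource
  affectedQuantity: QuantityValue
  observedAt: String
}
```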

Planning module

The next layer up is planning, which has huge utility in project management, business process modelling, community governance and market making. Essentially these features allow forward planning of the events and processes that users would like to happen in the future, and tracking progress towards the completion of that work.

This layer integrates deeply with the observations module, and projects which require the planning zome will almost certainly require the observations zome too: it's not much use being able to plan work if you can't track when it's been completed.
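
Continuing the same illustrative SDL (and reusing the types sketched under the observations module), the planning layer adds forward-looking counterparts to those observation records:

```graphql
# Illustrative only -- simplified from the ValueFlows planning vocabulary.

# A unit of work that turns input resources into output resources.
type Process {
  name: String!
  plannedInputs: [Commitment!]
  plannedOutputs: [Commitment!]
  observedInputs: [EconomicEvent!]    # recorded by the observations zome
  observedOutputs: [EconomicEvent!]
}

# A promise that a particular economic event will happen in the future.
type Commitment {
  action: String!
  committedQuantity: QuantityValue
  due: String
  fulfilledBy: [EconomicEvent!]       # the link that lets you track completion
}

# A desired or offered flow that nobody has committed to yet.
type Intent {
  action: String!
  satisfiedBy: [Commitment!]
}
```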

Knowledge module

Supplementing the planning zome is the knowledge zome, which provides for the creation and improvement of blueprints for performing different kinds of processes (referred to as 'recipes' in ValueFlows, much like a cooking recipe). Projects like Metavents see this as their holy grail, and (in their case) aim to contribute to a global knowledge commons of best-practice techniques for running events which are sustainable, compliant and smoothly run.
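
A hypothetical sketch of what recipe records might look like, again reusing types from the earlier sketches and simplifying the real ValueFlows recipe vocabulary:

```graphql
# Illustrative only -- a minimal take on ValueFlows-style recipes.

# A reusable blueprint for one kind of process, which plans can be generated from.
type RecipeProcess {
  name: String!
  note: String
  inputs: [RecipeFlow!]
  outputs: [RecipeFlow!]
}

# A template for a single input or output flow of that process.
type RecipeFlow {
  action: String!                             # e.g. "consume", "work", "produce"
  resourceConformsTo: ResourceSpecification   # from the observations sketch above
  recommendedQuantity: QuantityValue
}
```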

Agent module

Holochain and ValueFlows may both be agent-centric architectures, but that doesn't necessarily mean Holochain agents can be used as VF agents directly. Indeed, any ValueFlows hApp must provide the agent zome in order to be truly useful. One feature of this module is to act a bit like a profile for your user, extending the raw identity you may use in other Holochain apps with fields specific to ValueFlows. It also provides methods for querying the other zomes' data in ways which are relevant to each agent, for example locating all the resources which you currently have in your inventory.

In addition, the agent zome enables the configuration of different relationship types between agents (e.g. "member", "peer", "trading partner", "friend"). Apps may choose to implement additional functionality which depends on the presence of such relationships.

The other important feature, which may be implemented separately or as part of this module, is support for group agents which can represent organisations. Projects like BuildSort require such functionality to represent businesses, with capability-based access control so that members can perform actions on the organisation's behalf. There's a much deeper discussion of this in a document Bob Haugen recently penned, but suffice it to say that this is a large and complex body of work that will need to be explored with the Holochain core team to avoid overlap.
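
Sketching those responsibilities in the same illustrative SDL (names are approximate, and group agents in particular will end up more involved than this):

```graphql
# Illustrative only -- profiles, relationships and group agents, much simplified.

interface Agent {
  name: String!
  note: String
}

# An individual person: the raw Holochain identity plus ValueFlows-specific profile fields.
type Person implements Agent {
  name: String!
  note: String
  primaryLocation: String
}

# A group agent such as a business or community organisation.
type Organization implements Agent {
  name: String!
  note: String
  relationships: [AgentRelationship!]
}

# A typed, configurable link between two agents.
type AgentRelationship {
  subject: Agent!
  object: Agent!
  relationship: String!    # e.g. "member", "peer", "trading partner", "friend"
}

# Agent-centric queries over the other zomes' data.
type Query {
  myAgent: Agent
  myInventory: [EconomicResource!]   # resources currently held by the querying agent
}
```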

Holochain and the semantic web

There's one big omission in what's been outlined above, and addressing it could be a long and winding road: the issue of categorisation, i.e. "how we name things". More or less every item in ValueFlows, from resources to events to processes, has a "type" which clarifies what sort of thing it represents. The ValueFlows spec makes no decisions about what types of things each economic network can have within it, because every economy is different. Besides, that work has already been done for a variety of topics and industries: it's called the semantic web.

In the early stages, we expect that developers using Holo-REA will simply hard-code these ontologies based on the domain of their application. But that's far from ideal, because if people are left to define their own ontologies willy-nilly then these won't be compatible with any other economies, and people will have to awkwardly join them together at the seams. And the whole point of ValueFlows is to make localised economic networks interconnected and integrated.

Thus, a library of globally shared semantic web ontology zomes seems like a needed piece for our project and many others. There needs to be a way of categorising and organising information in Holochain that is compatible with the ways we categorise and organise information elsewhere.
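
One plausible shape for this, continuing the illustrative SDL from the sketches above, is simply to let records reference terms in externally published ontologies by URI rather than by app-local strings. The field name and URI below are placeholders, not decisions:

```graphql
# Hypothetical sketch only: the field name and URI are placeholders, not real vocabulary terms.

# Extend the per-app resource configuration so each resource type can point at
# concepts maintained in a shared semantic web ontology, instead of a local string.
extend type ResourceSpecification {
  # e.g. ["https://example.org/produce-ontology#Tomato"]
  resourceClassifiedAs: [String!]
}
```

The shared ontology zomes would then be the common source of those URIs, so that two otherwise unrelated economic networks can still recognise when they're talking about the same kind of thing.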

There is certainly no shortage of awesome prior art in converting semantic web data between different formats and databases, so getting SemWeb ontologies into Holochain seems like a pretty straightforward task. But getting semantic data back out again is a much more involved process, and one which will need the involvement of the Holochain core team to happen.

The ideal situation would be preconfigured DHT entrypoints which expose RDF data to the outside world, so that SemWeb data processors can integrate with Holochain apps via the HOLO hosting network. But in order to do this, every field of every record defined within a Holochain app needs to have semantic meaning assigned.

That seems like a difficult problem which will likely be burdensome for developers, and burdensome things tend to be ignored. If only Ceptr's semantic trees were an intrinsic part of Holochain, this wouldn't be an issue. Since Ceptr is set to one day supersede Holochain, perhaps there will be an upgrade path there. Perhaps not. That stuff is a long way off.

For now, perhaps those apps which care deeply about data ownership can at least get to a point where records can be exported from Holochain as SoLiD documents. That would be a truly portable, human-centered approach to data ownership. Until then, it's probably fair to say that "data ownership" on Holochain just means "being able to port my data to my own separate Holochain hApp".

And that's sort of OK. After all, you can run your own hApp which you own and control without any external interference. It's certainly miles better than Facebook or Google's data exports, which you can't do anything with without serious technical chops.

*

As an addendum, it should be noted that there are parallel projects underway on other platforms. People continue to fork and experiment with the original 'network resource planning' (NRP) software built by Sensorica: the Django system which pioneered the concepts and case studies from which the entire ValueFlows community emerged.

Other modern distributed systems are being built as part of the Open App Ecosystem project: the pluggable GraphQL server infrastructure created by Luandro, Ivan and others has an "economic sentences" layer which backs onto Scuttlebutt's replicating database engine. Ivan and Mayel are also working with Moodlenet on an ActivityPub implementation, which, as a web-native version of ValueFlows, should lead to very strong integration with semantic web technologies.

The power of GraphQL is what will ensure convergence of all these systems into a coherent network of economic networks, hence my interest in a reference specification. With the incredible ecosystem and tooling around GraphQL, we can use the reference schema to validate our implementations and ensure they can be combined with minimal effort. It also enables us to "write once, deploy anywhere" when it comes to the schema: the reference is the implementation, and systems wishing to implement ValueFlows need only define the bindings between their data storage layer and the spec.

I suppose the one-sentence summary is: these efforts are all really starting to snowball. 2019 looks to be the year of Open Value Networks!