Part 6 of our 7-part series with the team at BlockScience features a conversation with Michael Zargham, founder and CEO of BlockScience, and Jeff Emmett, Communication Lead at BlockScience. They delve into a variety of areas, including:
- The definition and role of “computer-aided governance” (CAG) in web3
- How CAG compares to computer-aided tools for other disciplines
- The ways that BlockScience utilizes CAG in its projects
- The necessity of mapping the terrain before using CAG tools
- The inherently political nature of policy decisions that allocate resources
- How CAG makes the decision-making process more visible
Michael Zargham (MZ) is the founder and CEO of BlockScience. Dr. Zargham holds a Ph.D. in systems engineering from the University of Pennsylvania, where he studied optimization and control of decentralized networks. He works on the mathematical specifications of blockchain-enabled software systems, with a focus on observability and controllability of the information state of the networks.
Jeff Emmett (JE) is Communications Lead at BlockScience, and has a background in electrical engineering from the University of Waterloo. He has been one of the early advocates for the establishment of safe and ethical Token Engineering in the web3 space. Along with Michael Zargham and Griff Green, he co-founded the Commons Stack to build out a toolkit of modular components that can be used for polycentric governance of DAO ecosystems.
The interview was conducted by Eugene Leventhal, Executive Director of SCRF.
Audio : Spotify, Apple, Spreaker
Video : YouTube
Key takeaways from the conversation:
What does “computer-aided governance” mean?
MZ : The vast majority of design for complex systems of any kind involves computer-aided design tools. These tools can do many specific things, but at bottom, they expand the human’s ability to make judgments, to play out the consequences of certain design decisions, or to optimize with respect to subjective choices made by humans. Humans make the choices, but computers aid humans in doing so.
JE : Computer-aided governance explores the trade-off space of inclusive and informed governance. That’s because we can have informed small groups or inclusive large groups, but there’s a tradeoff between inclusivity and how informed members can be in a group. Computer-aided governance allows us to be more inclusive while maintaining a high level of information among agents in these communities.
MZ : With computer-aided governance, people can engage more deeply in some algorithmic policy-making decisions because they’re able to interact with models, or descriptions of models, as social objects, where the assumptions of the models are being questioned, and the consequences and changes of those assumptions are what are being discussed, rather than people simply arguing past each other about conclusions drawn from completely different—or at least uncompared—models of the world.
So it sounds like computer-aided governance is more inclusive because the tools allow humans to visualize (and thus be informed by) things that were beyond their ability to envision on their own.
MZ : Yes, but with a couple of other features. In a computer-aided design paradigm, the natural properties of whatever system is being explored are typically baked-in. So if what is being explored is the world of physical solid objects, the geometry and statics are baked in; they simply work the way they work in the real world, and the human designer can explore the design space more effectively without needing to think about those things very much. It’s more difficult in governance where the policy-making or rule-making functions tend to bleed into the world that those policies or rules are acting on. In governance, the world model is subject to high degrees of uncertainty, so it’s important that the tool allows evaluation of the possible outcomes of a policy change under a wide range of possible responses.
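The point about evaluating a policy change "under a wide range of possible responses" can be sketched as a simple Monte Carlo exercise. Everything below (the toy world model, the `demand_response` range, the policy parameter) is an illustrative assumption, not a real governance model; the idea is only that the tool reports a distribution of outcomes rather than a single prediction.

```python
# Illustrative sketch: evaluate a candidate policy under many sampled "world
# responses" instead of one assumed model. All names and numbers are
# hypothetical assumptions for the sake of the example.
import random

def outcome(policy_strength, demand_response):
    # Toy world model: the outcome depends on an uncertain demand response.
    return policy_strength * demand_response

def evaluate(policy_strength, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    # The response is uncertain, so sample it from a wide range.
    samples = sorted(
        outcome(policy_strength, rng.uniform(0.5, 1.5)) for _ in range(n_samples)
    )
    # Report a distribution of outcomes, not a single point prediction.
    return {
        "median": samples[len(samples) // 2],
        "worst_5pct": samples[int(0.05 * len(samples))],
    }

result = evaluate(policy_strength=100.0)
```

A governance forum could then debate the assumptions (the range of responses, the world model itself) rather than arguing over a single forecast.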
JE : It’s also important that people be in control of these societal-scale systems or institutions which often have large-scale data inputs that, in theory, could feed good governance processes and allow healthy decisions to be made for communities. BlockScience is basically trying to bring something analogous to an air-traffic controller’s display terminal into community governance.
MZ : One of the things that is extremely common in the nascent web3 governance space is the assumption that if all parties agree on something, that means it’s going to be a good decision. Not to diminish the importance of people agreeing on what to do, but it’s important to understand that having a collective decision-making procedure is distinct from having a substantively good decision. That’s often not evident until later. So closing the loop between what was chosen, why it was chosen, and what the expected outcome was supposed to be, on the one hand, and what has actually happened, on the other, and then feeding that back into future governance decisions, is really important to this process of collectively governing.
This is just a basic learning process for the organization in building the capacity to make decisions and steer the organization. Having visibility into what’s happening, and the ability to agree even on a subset of the facts that are used to substantiate or evaluate a decision, requires data science and data engineering because of the digital nature of the underlying systems.
JE : To illustrate what Zargham was saying, look at this very zoomed-out view of a computer-aided governance loop:
It shows a Pilot’s Loop beside an Engineer’s Loop. The Pilot’s Loop shows computational projections leading to insight, which in turn leads to extensive iteration over the informed decisions and designs that emerge from those insights. These iterative findings and discoveries then drive the evolution of the computer-aided governance system itself.
MZ : Also, viewers should hesitate to over-extrapolate from this, because it works differently on multiple levels. You can use this loop from an operational perspective to build models and make day-to-day decisions, but governance decisions require zooming out from it in a different way.
How much of that is the result of choosing a model in advance and then creating a loop by going in and out of it? Is your approach centered around having a concrete model in mind alongside your governance thinking?
MZ : The computational models and data models are of secondary importance. The first-order understanding is phenomenological. Every learning and adaptive system has certain feedback loops that can close between intent and outcome. The Pilot’s Loop and Engineer’s Loop are basically a “thinking” loop and a “doing” loop. As we move from governance to computer-aided governance, we’re talking about data models that are effectively ontologies. They embody machinery for collecting data, machinery for extrapolating what’s going on, machinery for inferring the consequences of changes to policies.
BlockScience does quite a bit of this with cadCAD, but it’s not exclusively a cadCAD thing. We built cadCAD to allow us to do this with open source tech as opposed to proprietary tech. Control theory literature teaches a way of algorithmic decision-making that naturally leads one to work with software tools like MATLAB. But to be blunt, MATLAB is closed source and very expensive to use. Since BlockScience had a use case for software like that, the company was naturally drawn to building open source tooling to substitute for it.
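The pattern cadCAD supports can be sketched in plain Python. This is an illustrative reconstruction of the general structure, not cadCAD’s actual API: policy functions read the current state and propose signals, and state-update functions apply those signals to produce the next state. The grant policy and treasury variable are hypothetical.

```python
# Minimal sketch of a policy/state-update simulation loop (illustrative,
# not cadCAD's API). "Policy" functions read the state and propose signals;
# "state-update" functions apply those signals to produce the next state.

def grant_policy(state):
    # Hypothetical policy: spend 10% of the treasury on grants each step.
    return {"spent": state["treasury"] * 0.10}

def update_treasury(state, signals):
    return state["treasury"] - signals["spent"]

def simulate(initial_state, policies, updates, timesteps):
    state = dict(initial_state)
    trajectory = [state]
    for _ in range(timesteps):
        # Aggregate the signals proposed by every policy function.
        signals = {}
        for policy in policies:
            signals.update(policy(state))
        # Each state variable has its own update function.
        state = {var: fn(state, signals) for var, fn in updates.items()}
        trajectory.append(state)
    return trajectory

history = simulate({"treasury": 100.0}, [grant_policy], {"treasury": update_treasury}, 5)
```

Separating policies from state updates is what lets a community debate a proposed policy change in isolation: swap in a different policy function and re-run the same world model.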
For those thinking seriously about computer-aided governance, what’s the first step? Do people immediately download cadCAD and start model-hunting?
MZ : No, you start with mapping. The first rule for any decision-making apparatus, not even necessarily software per se, is to sufficiently understand things like who the stakeholders are in the system, what decisions are actually available to be taken, and how those decisions are related to the stakeholders in terms of their preferences and expectations. As the terrain gets mapped out, a sense of what tooling would be helpful will naturally emerge. BlockScience definitely advocates for starting simple. Although people often see complex outputs from BlockScience, that’s not out of a desire for complexity per se. That comes out of multiple iterations until we have something that fits the problem at hand.
JE : This illustration shows the computer-aided governance map and process:
It’s an iterative process as well. As previously mentioned, it starts with observing the system, understanding the stakeholders, the flows, the resources, and the tools that might be available. Starting at 12 o’clock and working around the circle, the designer or engineer observes systems, asks questions, formally maps them, models them, presents what came out of that iteration in community debates, puts the results of those debates into the model, and then starts the process all over again.
BlockScience originally used this work in some organic communities, for example the Swarms in 1Hive. That model was used to iterate on their token issuance parameters over time: computational projections of how certain parameters will impact the token economics are discussed with the community and then implemented in their smart contract deployments. It’s quite exciting to see those processes emerge naturally.
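A projection of this kind can be sketched as a simple parameter sweep. The issuance model and candidate rates below are hypothetical, not 1Hive’s actual parameters; the point is that each candidate rate produces a trajectory the community can compare before deciding what to implement.

```python
# Hypothetical sketch: project token supply under candidate issuance rates
# so a community can compare outcomes before voting. The compounding model
# and the rates are illustrative assumptions, not a real protocol's design.

def project_supply(initial_supply, annual_rate, years):
    supply = initial_supply
    path = [supply]
    for _ in range(years):
        supply *= 1 + annual_rate  # simple compounding issuance
        path.append(supply)
    return path

# Sweep over candidate issuance rates proposed for discussion.
candidates = {rate: project_supply(1_000_000, rate, 5) for rate in (0.05, 0.10, 0.30)}
for rate, path in candidates.items():
    print(f"rate={rate:.0%}: supply after 5 years = {path[-1]:,.0f}")
```

Publishing the sweep alongside its assumptions gives the community a shared object to critique, which is the “discussed with the community” step described above.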
MZ : It’s really common, however, to see people jump straight to the model and skip the “observe and map” phase. The mapping component is really about understanding the lay of the land. In the absence of that contextual understanding or social science workflow, the technical and quantitative workflows are often untethered from reality. That can be dangerous, because computer tools can lead groups to agree to do something that is out of alignment with one or more stakeholder groups simply because the modeling makes it feel objective. The gap between reality and the model is always going to be there. Keeping mapping and modeling as separate steps is one way to remind designers to keep track of the reductions inherent in the model, and to account for that when interpreting the results.
What are the right first steps for operationalizing the journey through the CAG map? Can an independent individual within a DAO kick this process off, or does there need to be a working group of specific stakeholders?
MZ : From personal experience, it starts with being a community member, someone who has taken the time to gain expertise in various methodologies. For example, BlockScience has worked with Ellie Rennie from RMIT on digital ethnography and participatory digital ethnography. Sometimes when there is a lack of clarity about the relationships of various stakeholders, a useful first step can be making a map and dropping it in the governance forum to provoke more focused discussions among members of the community. In fact, it can be interesting to have a lineage of maps that represent how an organization has changed over time, and then possibly link those back to more quantitative observables about that system.
What sorts of mindsets or culture shifts tend to be part of the journey from non-computer-aided governance to computer-aided governance?
JE : Some of the tools mentioned can play a central role here. The field in general is pushing towards open data, open models, and open modeling tools, leading to a new era of data-driven societal discourse in which citizen scientists draw on open models and open datasets to weigh in on policy choices that can lead us in more sustainable directions.
MZ : It’s important to remember that designers and engineers are already using computers as part of their design and analysis behavior. Sometimes they’re not used to the best effect because the designers don’t think about the implications of their measurement apparatus, or the implications of entrenching policy decisions in software, or the implications of imposing something normative like the concept of neutrality on an algorithm. One of the challenges with governance is that pretty much any policy decision, no matter how seemingly banal, is political, because when resources are being routed in a particular direction, people are going to have strong opinions or preferences about it.
Infrastructure is not neutral; it has politics in it. This isn’t necessarily bad, but it’s important to acknowledge that there’s a subjectivity in all prioritization. Any design that contains constraints has tradeoffs built into it. It’s important to own that fact, and not hide behind a claim to objectivity which will end up hiding the subjectivity of the choices made. This prevents the community from engaging in real discourse about the trade-offs. It also denies future participants the right to question those decisions, which is one of the most problematic aspects of a “neutrality” narrative.
Systems do need to change over time, however modestly. To get a sense of BlockScience’s paradigm for how much change versus how little change is good, look at a paper co-written with Kelsie Nabben entitled “Aligning ‘Decentralized Autonomous Organization’ to Precedents in Cybernetics.” That paper characterizes the trade-off space between extreme immutability, which is very robust but not very resilient, and extreme mutability, which is resilient because it has a lot of adaptive capacity, but isn’t very robust because an agent can leverage that mutability to permanently capture the system.
You’ve watched computer-aided governance play out with a number of real organizations. What sorts of pathways have been more successful in practice, or at least more frequently seen?
MZ : There’s been a tendency for a small number of people to be deeply engaged, and the change we need to see is a more forward expression of one’s own positionality and reflexivity. Those concepts are broadly drawn from the social science tradition, where observations are inherently inseparable from the observer. But those concepts are also present in cybernetics as the concept of “second-order cybernetics.”
It’s a recurring theme here that crypto is in the first-wave cybernetics modality. But the introduction of the computer-aided governance map and more attention focused on our own perspectives and the effects of our perspectives on our recommendations, or even on the votes that we cast, are actually going to be really helpful. In practice, computer-aided governance means that we can have a plurality of models that are treated as perspectives on the same system rather than saying “We’ve found the one true model and therefore we know what to do.”
Each individual human has a unique view or a unique slice on the same system, and since those humans are the actual stakeholders—the people for whom the system is being built and maintained—we can reason about how computers can be used to represent those perspectives, or a synthesis of those perspectives. But remember that there may be many such syntheses, and that they may be in conflict with each other. Honestly, the really useful information is almost always in the differences, not in the places where they’re the same.
The profession needs to get to a place where people are more forward about acknowledging that they made specific assumptions, that they have a specific background, and based upon those things, they think about a given topic this way and make their recommendations. And then, after doing comparative analysis between various individual recommendations, there’s a natural ebb and flow between cohesion and fragmentation.
What is the role of computer-aided governance tools in making decisions and outcomes more legible?
MZ : We want to avoid situations where the default is “trust me.” That invites a technocratic governance paradigm. Success in a computer-aided governance paradigm is one in which the experts, who almost certainly will exist in those systems, are still accountable to the constituents of their systems. The expectation is that their models are going to be visible, and that at least the assumptions and conclusions of those models are sufficiently legible that any constituent can understand them. Getting that scientific process into the open source modality is itself a big leap.
It’s never going to be the case that everyone in a community wants to take part in governance, nor would it be fair to ask everyone to do it, since most people participate in so many systems. But anyone who does look into a given governance process should be able to easily see the assumptions and the results that were rendered from it.
So many people are coming into this space with the desire for tech to obviate human challenges and abstract away the need to have hard conversations with colleagues. So it’s encouraging to see tools like yours that don’t claim to model all the complexity away and provide easy answers, but rather help to make information more visible, and to allow for more nuanced interaction in these ecosystems.
MZ : Computer-aided governance is about enabling good governance with computers, and good governance involves respecting the tensions between people with honest differences, creating space, and understanding that there isn’t a “right answer,” but nonetheless coming together to make a decision. Because of the real-world uncertainty in organizations, sometimes the best that can be said is, “We thought that was going to work. It didn’t. What do we do now?” There’s no need to point fingers or complain. Perhaps the right information wasn’t available at the time, but there’s always the possibility of making a change in the future based upon new learnings.
JE : What was discussed today represents only the first steps into a huge design and analytics space. It’s a really interesting combination of blockchain, data science, and open-source tools that allow us to connect myriad streams of data. Ultimately, this will enable data-driven decision-making on what today seem intractable policy dilemmas.