SCRF Interviews | Computer Aided Governance - Michael Zargham and Jeff Emmett (Ep. 18)

Part 6 of our 7-part series with the team at BlockScience features a conversation with Michael Zargham, founder and CEO of BlockScience, and Jeff Emmett, Communication Lead at BlockScience. They delve into a variety of areas, including:

  • The definition and role of “computer-aided governance” (CAG) in web3
  • How CAG compares to computer-aided tools for other disciplines
  • The ways that BlockScience utilizes CAG in its projects
  • The necessity of mapping the terrain before using CAG tools
  • The inherently political nature of policy decisions that allocate resources
  • How CAG makes the decision-making process more visible

Michael Zargham (MZ) is the founder and CEO of BlockScience. Dr. Zargham holds a Ph.D. in systems engineering from the University of Pennsylvania, where he studied optimization and control of decentralized networks. He works on the mathematical specifications of blockchain-enabled software systems, with a focus on observability and controllability of the information state of the networks.

Jeff Emmett (JE) is Communications Lead at BlockScience, and has a background in electrical engineering from the University of Waterloo. He has been one of the early advocates for the establishment of safe and ethical Token Engineering in the web3 space. Along with Michael Zargham and Griff Green, he co-founded the Commons Stack to build out a toolkit of modular components that can be used for polycentric governance of DAO ecosystems.

The interview was conducted by Eugene Leventhal, Executive Director of SCRF.

Audio: Spotify, Apple, Spreaker

Video: YouTube

Key takeaways from the conversation:

What does “computer-aided governance” mean?

MZ : The vast majority of design for complex systems of any kind involves computer-aided design tools. These tools can do many specific things, but at bottom, they expand the human’s ability to make judgments, to play out the consequences of certain design decisions, or to optimize with respect to subjective choices made by humans. Humans make the choices, but computers aid humans in doing so.

JE : Computer-aided governance explores the trade-off space of inclusive and informed governance. That’s because we can have informed small groups or inclusive large groups, but there’s a tradeoff between inclusivity and how informed members can be in a group. Computer-aided governance allows us to be more inclusive while maintaining a high level of information among agents in these communities.

MZ : With computer-aided governance, people can engage more deeply in some algorithmic policy-making decisions because they’re able to interact with models, or descriptions of models, as social objects, where the assumptions of the models are being questioned, and the consequences and changes of those assumptions are what are being discussed, rather than people simply arguing past each other about conclusions drawn from completely different—or at least uncompared—models of the world.

So it sounds like computer-aided governance is more inclusive because the tools allow humans to visualize (and thus be informed by) things that were beyond their ability to envision on their own.

MZ : Yes, but with a couple of other features. In a computer-aided design paradigm, the natural properties of whatever system is being explored are typically baked-in. So if what is being explored is the world of physical solid objects, the geometry and statics are baked in; they simply work the way they work in the real world, and the human designer can explore the design space more effectively without needing to think about those things very much. It’s more difficult in governance where the policy-making or rule-making functions tend to bleed into the world that those policies or rules are acting on. In governance, the world model is subject to high degrees of uncertainty, so it’s important that the tool allows evaluation of the possible outcomes of a policy change under a wide range of possible responses.
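Zargham's point that the tool must let a group evaluate a policy change under a wide range of possible responses can be sketched as a Monte Carlo sweep. Everything below is a toy illustration: the fee policy, the demand-sensitivity response model, and all numbers are assumptions made for this sketch, not anything from BlockScience's actual tooling.

```python
import random

def simulate_policy(fee_rate, demand_sensitivity, periods=52, seed=None):
    """Project cumulative revenue under one assumed agent-response scenario."""
    rng = random.Random(seed)
    volume, revenue = 1_000.0, 0.0
    for _ in range(periods):
        # Agents respond to the fee: higher fees suppress activity.
        volume *= 1.0 - demand_sensitivity * fee_rate + rng.gauss(0, 0.02)
        revenue += volume * fee_rate
    return revenue

def evaluate_under_uncertainty(fee_rate, sensitivities, runs=50):
    """Evaluate one policy choice across a range of possible world responses."""
    outcomes = [simulate_policy(fee_rate, s, seed=i)
                for s in sensitivities for i in range(runs)]
    return min(outcomes), sum(outcomes) / len(outcomes), max(outcomes)

low, mean, high = evaluate_under_uncertainty(0.03, [0.5, 1.0, 2.0])
```

The output is deliberately a range rather than a single number: since the world model is uncertain, the governance conversation is about whether the worst case is acceptable, not just the average.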

JE : It’s also important that people be in control of these societal-scale systems or institutions which often have large-scale data inputs that, in theory, could feed good governance processes and allow healthy decisions to be made for communities. BlockScience is basically trying to bring something analogous to an air-traffic controller’s display terminal into community governance.

MZ : One of the things that is extremely common in the nascent web3 governance space is the assumption that if all parties agree on something, that means it’s going to be a good decision. Not to diminish the importance of people agreeing on what to do, but it’s important to understand that having a collective decision-making procedure is distinct from having a substantively good decision. That’s often not evident until later. So closing the loop between what has been chosen, why it was chosen, and what the expected outcome was supposed to be; and what has actually happened; and then feeding that back on future governance decisions, is actually really important to this process of collectively governing.

This is just a basic learning process for the organization in building the capacity to make decisions and steer the organization. Having visibility into what’s happening, and the ability to agree even on a subset of the facts that are used to substantiate or evaluate a decision, requires data science and data engineering because of the digital nature of the underlying systems.

JE : To illustrate what Zargham was saying, look at this very zoomed-out view of a computer-aided governance loop:

It has a Pilot's Loop shown beside an Engineer's Loop. The Pilot's Loop shows computational projections leading to insight, which leads to extensive iteration over the informed decisions and designs that emerge from those insights. These iterative findings and discoveries then drive the evolution of the computer-aided governance system itself.

MZ : Also, viewers should hesitate to over-extrapolate on this because it works differently on multiple levels. You can use this loop from an operational perspective to build models and make day-to-day decisions, but governance decisions require being zoomed out of it in a different way.

How much of that is the result of choosing a model in advance and then creating a loop by going in and out of it? Is your approach centered around having a concrete model in mind alongside your governance thinking?

MZ : The computational models and data models are of secondary importance. The first order understanding is phenomenological. Every learning and adaptive system has certain feedback loops that can close between intent and outcome. The Pilot’s Loop and Engineer’s Loop are basically a “thinking” loop and a “doing” loop. As we move from governance to computer-aided governance, we’re talking about data models that are effectively ontologies. They embody machinery for collecting data, machinery for extrapolating what’s going on, machinery for inferring the consequences of changes to policies.

BlockScience does quite a bit of this with cadCAD, but it’s not exclusively a cadCAD thing. We built cadCAD to allow us to do this with open source tech as opposed to proprietary tech. Control theory literature teaches a way of algorithmic decision-making that naturally leads one to work with software tools like MATLAB. But to be blunt, MATLAB is closed source and very expensive to use. Since BlockScience had a use case for software like that, the company was naturally drawn to building open source tooling to substitute for it.
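The state-update pattern that simulation tools of this kind generalize can be sketched in plain Python. To be clear, this is not the cadCAD API; the function names and the single-token-mint model are illustrative assumptions only, showing the shape of a policy-signal-plus-state-update loop.

```python
# Minimal sketch of the pattern cadCAD-style tools generalize: explicit
# state variables evolve through update functions driven by policy signals.
# All names and the issuance model here are illustrative, not cadCAD's API.

def mint_policy(params, state):
    """Policy block: compute signals from the current state."""
    return {"minted": params["issuance_rate"] * state["supply"]}

def update_supply(params, state, signals):
    """State update block: apply the policy's signals to one state variable."""
    return state["supply"] + signals["minted"]

def run(params, initial_state, timesteps):
    state, history = dict(initial_state), []
    for t in range(timesteps):
        signals = mint_policy(params, state)
        state = {"supply": update_supply(params, state, signals), "t": t + 1}
        history.append(state)
    return history

history = run({"issuance_rate": 0.01}, {"supply": 1_000_000.0, "t": 0}, 10)
```

Separating policies (what an agent or rule decides) from state updates (what mechanically changes as a result) is the design choice that makes assumptions legible and debatable, which is the point of the interview's "models as social objects."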

For those thinking seriously about computer-aided governance, what’s the first step? Do people immediately download cadCAD and start model-hunting?

MZ : No, you start with mapping. The first rule for any decision-making apparatus, not even necessarily software per se, is to sufficiently understand things like who the stakeholders are in the system, what decisions are actually available to be taken, and how those decisions are related to the stakeholders in terms of their preferences and expectations. As the terrain gets mapped out, a sense of what tooling would be helpful will naturally emerge. BlockScience definitely advocates for starting simple. Although people often see complex outputs from BlockScience, that's not out of a desire for complexity per se. That comes out of multiple iterations until we have something that fits the problem at hand.

JE : This illustration shows the computer-aided governance map and process:

It’s an iterative process as well. As previously mentioned, it starts with observing the system, understanding the stakeholders, the flows, the resources, and the tools that might be available. Starting at 12 o’clock and working around the circle, the designer or engineer observes systems, asks questions, formally maps them, models them, presents what came out of that iteration in community debates, puts the results of those debates into the model, and then starts the process all over again.

BlockScience originally used this work in some organic communities; for example, the Swarms in 1Hive. That model was used to iterate on their token issuance parameters over time. It involves computational projections of how certain parameters will impact the token economics, which are then discussed with the community, and then implemented in their smart contract deployments. It’s quite exciting to see those processes emerge naturally.
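A projection step of that kind can be as simple as sweeping candidate parameter values and publishing the results for community debate. The constant-rate issuance model and the numbers below are purely illustrative, not 1Hive's actual mechanism.

```python
def five_year_supply(issuance_rate, supply=1.0, years=5):
    """Project total token supply under a constant annual issuance rate."""
    for _ in range(years):
        supply *= 1 + issuance_rate
    return supply

# Hypothetical candidate parameter values to put before the community.
candidates = [0.02, 0.05, 0.10]
projections = {rate: five_year_supply(rate) for rate in candidates}
```

The table of projections, not the code itself, is the social object: the community debates which supply trajectory it wants, then the chosen parameter goes into the smart contract deployment.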

MZ : It’s really common, however, to see people jump straight to the model and skip the “observe and map” phase. The mapping component is really about understanding the lay of the land. In the absence of that contextual understanding or social science workflow, the technical and quantitative workflows are often untethered from reality. That can be dangerous, because computer tools can lead groups to agree to do something that is out of alignment with one or more stakeholder groups simply because the modeling makes it feel objective. The gap between reality and the model is always going to be there. Keeping mapping and modeling as separate steps is one way to remind designers to keep track of the reductions inherent in the model, and to account for that when interpreting the results.

What are the right first steps for operationalizing the journey through the CAG map? Can an independent individual within a DAO kick this process off, or does there need to be a working group of specific stakeholders?

MZ : From personal experience, it starts with being a community member, someone who has taken the time to gain expertise in various methodologies. For example, BlockScience has worked with Ellie Rennie from RMIT on digital ethnography and participatory digital ethnography. Sometimes when there is a lack of clarity about the relationships of various stakeholders, a useful first step can be making a map and dropping it in the governance forum to provoke more focused discussions among members of the community. In fact, it can be interesting to have a lineage of maps that represent how an organization has changed over time, and then possibly link those back to more quantitative observables about that system.

What sorts of mindsets or culture shifts tend to be part of the journey from non-computer-aided governance to computer-aided governance?

JE : Some of the tools that have been mentioned can play a central role here. The field in general is pushing more towards open data, open models and modeling tools, leading to a new area of data-driven societal discourse where citizen scientists are pulling open models and open datasets to weigh in on policy choices that can lead us into more sustainable directions.

MZ : It’s important to remember that designers and engineers are already using computers as part of their design and analysis behavior. Sometimes they’re not used to the best effect because the designers don’t think about the implications of their measurement apparatus, or the implications of entrenching policy decisions in software, or the implications of imposing something normative like the concept of neutrality on an algorithm. One of the challenges with governance is that pretty much any policy decision, no matter how seemingly banal, is political, because when resources are being routed in a particular direction, people are going to have strong opinions or preferences about it.

Infrastructure is not neutral; it has politics in it. This isn’t necessarily bad, but it’s important to acknowledge that there’s a subjectivity in all prioritization. Any design that contains constraints has tradeoffs built into it. It’s important to own that fact, and not hide behind a claim to objectivity which will end up hiding the subjectivity of the choices made. This prevents the community from engaging in real discourse about the trade-offs. It also denies future participants the right to question those decisions, which is one of the most problematic aspects of a “neutrality” narrative.

Systems do need to change over time, however modestly. To get a sense of BlockScience's paradigm for how much change versus how little change is good, look at a paper co-written with Kelsie Nabben entitled “Aligning ‘Decentralized Autonomous Organization’ to Precedents in Cybernetics.” That paper characterizes the trade-off space between extreme immutability, which is very robust but not very resilient, and extreme mutability, which is resilient because it has a lot of adaptive capacity, but isn't very robust because an agent can leverage that mutability to permanently capture the system.

You’ve watched computer-aided governance play out with a number of real organizations. What sorts of pathways have been more successful in practice, or at least more frequently seen?

MZ : There's been a tendency for a small number of people to be deeply engaged, and the change we need to see is a more forward expression of one's own positionality and reflexivity. Those concepts are broadly drawn from the social science tradition, where observations are inherently inseparable from the observer. But those concepts are also present in cybernetics as the concept of “second-order cybernetics.”

It’s a recurring theme here that crypto is in the first-wave cybernetics modality. But the introduction of the computer-aided governance map and more attention focused on our own perspectives and the effects of our perspectives on our recommendations, or even on the votes that we cast, are actually going to be really helpful. In practice, computer-aided governance means that we can have a plurality of models that are treated as perspectives on the same system rather than saying “We’ve found the one true model and therefore we know what to do.”

Each individual human has a unique view or a unique slice on the same system, and since those humans are the actual stakeholders—the people for whom the system is being built and maintained—we can reason about how computers can be used to represent those perspectives, or a synthesis of those perspectives. But remember that there may be many such syntheses, and that they may be in conflict with each other. Honestly, the really useful information is almost always in the differences, not in the places where they’re the same.

The profession needs to get to a place where people are more forward about acknowledging that they made specific assumptions, that they have a specific background, and based upon those things, they think about a given topic this way and make their recommendations. And then, after doing comparative analysis between various individual recommendations, there’s a natural ebb and flow between cohesion and fragmentation.

What is the role of computer-aided governance tools in making decisions and outcomes more legible?

MZ : We want to avoid situations where the default is “trust me.” That invites a technocratic governance paradigm. Success in a computer-aided governance paradigm is one in which the experts, who almost certainly will exist in those systems, are still accountable to the constituents of their systems. The expectation is that their models are going to be visible, and that at least the assumptions and conclusions of those models are sufficiently legible that any constituent can understand them. Getting that scientific process into the open source modality is itself a big leap.

It’s never going to be the case that everyone in a community wants to take part in governance, nor would it be fair to ask everyone to do it, since most people participate in so many systems. But anyone who does look into a given governance process should be able to easily see the assumptions and the results that were rendered from it.

So many people are coming into this space with the desire for tech to obviate human challenges and abstract away the need to have hard conversations with colleagues. So it’s encouraging to see tools like yours that don’t claim to model all the complexity away and provide easy answers, but rather help to make information more visible, and to allow for more nuanced interaction in these ecosystems.

MZ : Computer-aided governance is about enabling good governance with computers, and good governance involves respecting the tensions between people with honest differences, creating space, and understanding that there isn’t a “right answer,” but nonetheless coming together to make a decision. Because of the real-world uncertainty in organizations, sometimes the best that can be said is, “We thought that was going to work. It didn’t. What do we do now?” There’s no need to point fingers or complain. Perhaps the right information wasn’t available at the time, but there’s always the possibility of making a change in the future based upon new learnings.

JE : What was discussed today represents only the first steps into a huge design and analytics space. It’s a really interesting combination of blockchain, data science, and open-source tools that allow us to connect myriad streams of data. Ultimately, this will enable data-driven decision-making on what today seem intractable policy dilemmas.


This is so interesting to me. Years ago, I wanted to create a program that would simulate policy, so policymakers could input their decision and get a 5-to-50-year time lapse of what could happen, along with variations on it. I love the view that you give of the “Pilot's Loop” vs. the “Engineer's Loop” in your model: you likely can't have a good decision without constantly updating the model and inputting real-world information (events, new assumptions, canceled old assumptions, etc.).

Interested in testing cadCAD or any other analytics tool BlockScience is creating for governance.


Good job on your research, I really appreciate it. Increasingly, Computer-Aided Governance (CAG) is a methodology used in DAOs and decentralized projects to improve the quality of decisions made by incorporating evidence from data, analysis, and various types of modeling, including integrative “complex systems” simulation modeling. In general, the types of evidence used during the course of DAO governance decision-making, from most to least “value added,” are: reports/analyses; integrative simulation models; supervised AI/ML/statistical models; clustering and unsupervised learning; data visualization; and raw data.

Imagine that there's a proposal before DAO governance on whether to include a new asset type. The sub-section dedicated to this decision would then contain a number of posts, including high-level reports built on earlier, foundational analyses. Take, for example, a “Token Economics Report” which combines and discusses evidence, including models and analyses previously posted to the forum.

Within the DAO governance community, normative behaviors are assumed to arise and be reinforced, such as voters informing their vote with the best evidence, and also favouring evidence that clearly references source material (analysis, models, or raw data).


Thank you @rlombreglia for this wonderful interview.

My key takeaway is about BlockScience.


A lot of decisions taken in a DAO, or even in any traditional organisation, are made without exposure to all the right information, and the results of such decisions might take projects in the wrong direction. With BlockScience's tools, organizations can see the likely results of such decisions before taking them.
This should be a very welcome development for DAOs in general.
It's exciting that decision-making will be based on data science modeling and data engineering research.
Computer-aided governance tools expand the human ability to make judgments, to play out the consequences of certain design decisions, or to optimize with respect to subjective choices made by humans. Humans make the choices, but computers aid humans in doing so.
This should presumably also make decision-making and voting speedier.
It also means the process is supposed to consider and include all relevant perspectives and biases in the organization.


This was a fascinating conversation. It’s clear we can use data and computational tools to improve our collective decision-making.

One of the recurrent themes of this episode was iteration or loops. We can see loops in both of the images above. The key feature of these loops is also central to cybernetics: feedback. By consciously building in feedback loops, computer-aided governance processes can be validated for accuracy, and improved where they fall short.

I am reminded of Ray Dalio's investment strategy. Ray would write down what he was investing in and why, then track how his investments performed and whether they squared with his reasons for investing. He would then update his investment strategy and play it back on historical data to see how it would have performed before deciding whether to implement it. By doing this, Ray helped popularize data-driven computational modeling on Wall Street and grew his firm into one of the largest and most successful in history.

Today, BlockScience might say that Ray had a Computer-Aided Governance process. By building feedback loops, integrating real-world data, and projecting both forward and backward, Ray was able to glean information about how his decisions would perform. In the same way, CAG could enable DAOs to glean information about how their decisions would perform.
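The feedback loop described above (record a decision rule, replay it against historical data, score the result) can be sketched as a minimal backtest. The price series and the buy/sell rule here are hypothetical, invented purely to show the loop's shape.

```python
def backtest(strategy, prices):
    """Replay a decision rule against a historical price series and score it."""
    cash, units = 100.0, 0.0
    for today, tomorrow in zip(prices, prices[1:]):
        if strategy(today) and cash > 0:          # rule says "buy"
            units, cash = cash / today, 0.0
        elif not strategy(today) and units > 0:   # rule says "sell"
            cash, units = units * today, 0.0
    # Mark any remaining position to the final price.
    return cash + units * prices[-1]

# Hypothetical rule: buy below 10, sell at or above 10.
history = [8.0, 9.0, 11.0, 9.5, 12.0]
final_value = backtest(lambda price: price < 10, history)
```

The score alone is not the point; comparing the score against the written-down rationale is what turns a single run into a feedback loop that improves the next strategy.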

This is of special importance to DAOs because of the trade-off mentioned in the episode between inclusivity and information. While in traditional organizations information can remain in the hands of high-context individuals and therefore be more exclusive, Computer Aided Governance processes allow high inclusivity and highly informed membership. Any member of the DAO could access a dashboard that lets them know how the DAO is performing and how it has been impacted by certain decisions.

In the extreme case, CAG could allow DAOs to evolve into a primarily computational organization. With effectively mapped models, a DAO member’s participation could be limited to updating the inputs and assumptions baked into a model such that the model then makes decisions that affect the DAO. This is most readily possible in Investment DAOs. However, the guests on the episode point out that for most DAOs CAG will improve the ability of human decision-making.

The two key pain points for implementing CAG in a DAO would likely be acquiring the right data, and making effective tradeoffs when mapping/modeling.

For example, in the case of a DAO like Gitcoin, it would be extremely helpful to have a CAG process that could model how effective Quadratic Funding is at allocating funding to the highest-impact grants. While there is data on which grants were allocated funding, there is limited data on the impact of those grants. It's therefore difficult to understand just how effective QF is using quantitative data. A few open questions I have are: how do we quantify impact across projects as different as open-source libraries (like ethers.js) and advocacy organizations (like Coin Center)? Would it be fair to segment decision-making based on projects which can be reasonably compared (we could then scrape download rates for open-source libraries)?
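One possible answer to the segmentation question is to score projects only against peers in the same segment, for instance with a z-score per segment. The project names, segments, and metrics below are hypothetical placeholders, not real Gitcoin data.

```python
from statistics import mean, stdev

def normalize_within_segments(projects):
    """Score each project only against peers in its own segment (z-scores)."""
    by_segment = {}
    for name, segment, raw in projects:
        by_segment.setdefault(segment, []).append((name, raw))
    scores = {}
    for segment, members in by_segment.items():
        values = [raw for _, raw in members]
        mu = mean(values)
        sigma = stdev(values) if len(values) > 1 else 1.0
        for name, raw in members:
            scores[name] = (raw - mu) / (sigma or 1.0)
    return scores

# Hypothetical projects; the raw metric differs per segment on purpose.
projects = [
    ("lib-a", "library", 5_000_000),   # e.g. weekly downloads
    ("lib-b", "library", 1_000_000),
    ("org-a", "advocacy", 12),         # e.g. policy briefs published
    ("org-b", "advocacy", 4),
]
scores = normalize_within_segments(projects)
```

This sidesteps the apples-to-oranges problem, though it quietly assumes each segment's chosen metric is itself a fair proxy for impact, which is exactly the kind of assumption the interview argues should be surfaced and debated.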

Mapping/modeling effectively is difficult because it requires making key assumptions that strip away complexity without losing too much granularity. It would be especially useful to make these decisions with those experts who best understand the workings of a system. The strength of feedback loops also really shines in improving this process.

I really enjoyed this conversation and it’s shifted how I think about how DAOs should be making decisions. There is so much room to improve governance through increasing how informed DAO members are, making assumptions/conclusions transparent to all, and maximizing our use of computational tools.


Wonderful and informative interview! It had me wondering whether the framework of complex systems (emergent, non-linear properties due to local effects, etc.) is compatible with cybernetic (1.0/2.0) thinking, which (it seems to me) is concerned with linear and circular (homeostatic) dynamics.

For my eth, complex systems theory seems a more productive path forward, at least on the empirical research side. Token engineering, however, might start from a cybernetic model, but at some point complexity takes over and a new set of tools is needed. I think we call this set of tools capable of handling dynamics at scale “governance” :sunglasses:


The podcast episode and article are thought-provoking and insightful to what BlockScience does. Though I’ve never used cadCAD, I see it used all the time in the Token Engineering Commons. I now feel I have a better understanding of what it’s used for and why we would need computer assistance in governance. I also appreciated UmarKhanEth’s response and his example using Ray Dalio’s feedback-enhanced investment strategy. It’s interesting to see how various industries came across the power of closed-loop systems and what innovations sprang from them. I particularly want to address current issues in product management and how computer-aided governance can be used to strengthen this process.

Ever since entering the web3 space, I’ve noticed quality value creation has been at the center of critics’ debate. Questions like “why would I use a crypto wallet?” or “what makes web3 better than what we have today?” have implied a need for outstanding improvements in web3 when plotted against web2. I think product management is a field that can experience outsized results from web3 business models. My reasoning is similar to BlockScience’s in that any great decision comes from a well-informed source. Today, product managers conduct user interviews, research, outreach, and even paid time slots to get authentic feedback from customers. The Computer-Aided Governance Map and Process resembles a design thinking process of sorts. My main idea is what if you could bundle product feedback and iteration with the governance process?

DAOs must create value in order to survive. The largest DAOs today are those that help govern and maintain DeFi protocols with high TVL. The most successful DAOs are DAOs that have a product. Tying product management with governance seems almost inevitable when looking at many governance forums for large DAOs today. By tying in product management with governance, DAOs can make higher-quality decisions and design better products & iterations.

A product manager can enable this by taking advantage of the Observe and Ask phases of the CAG map. Governance sentiments should be the focus here, but product reviews and feedback can also be useful to gather. For this example, we can use Uniswap. There’s a feature right now that is still being debated about enabling a new fee tier to swaps. With CAG, we can observe the forums and ask users about what people feel regarding this decision. During this stage, a product manager can easily work with the governance researchers to ask product-focused questions. “Would you use a 10% fee tier for a swap? Why or why not?” “How would you want this option presented to you?” “How does the current fee tier mechanism make you feel? Is there anything you don’t like about it?” And so on.

This way, in the Map and Model phases, the product manager can propose changes to the UI or even a new version of the protocol. These can be presented, debated upon, enacted, and monitored just like any other governance decision. However, these decisions won’t come from the inherent politics you get with subjective options; rather, they’ll come from the data-driven approach of a product-minded person who can show the community that the given direction is the right direction to go toward.

One of my favorite observations in the article is the assumption in many web3 communities that a consensus decision is by default a good one simply because the majority agrees. The same applies to product managers. I do appreciate how computer-aided approaches can help us get closer to the right answer, but what constitutes a “good” decision in a DAO? Moreover, how do we find the best decision in a DAO?

This is the frontier I think will encompass the next wave of cybernetics modalities, as stated in the article. Using mathematical tools like decision trees and graphs to optimize DAO decision-making will be well sought after. Until we can mathematically represent the decision matrix and payoffs of each decision, we'll be stuck with “best guesses.” I have confidence these will improve over time as well.


How Computer-aided Governance Can Solve Governance Issues

Recently, in the US, we had an election. It was a messy affair. A lot of electoral shenanigans and political fraud occurred over the course of the election.

Much of the rascality is now being exposed by the Twitter files.

Then there are the great policies and bills, with great benevolent titles, that are supposed to make our lives better… You know the ones… the ones they pass that always make things worse. Yeah, those bills.

No wonder people are looking for a silver bullet for governance.

The question is: Is computer-aided governance the silver bullet?

In this post, we are going to learn about computer-aided governance (CAG) and its potential benefits.

What is computer-aided governance?

Computer-aided governance is a data-driven approach to decision-making, often supported by an open-source simulation tool called cadCAD. It's a new and interesting approach to modeling, testing, and simulating the complex relationships and decisions that govern an organization or state.

Is this the beginning of Skynet?

CAG is intended to be a supplement, not a replacement, for human decision-making. It enhances human decision-making by modeling and testing policy outcomes before they are implemented, providing a “playout” of the consequences that would or could result from a policy.

The program can also simulate policy alternatives to identify the optimum governance solution.
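Simulating alternatives and picking the best projected outcome can be sketched as a sweep over candidate policies scored by a toy cost-benefit model. The outcome function and candidate rates below are invented for illustration, not derived from any real governance tool.

```python
def projected_outcome(policy, horizon=10):
    """Toy model: benefit grows linearly with the rate, cost quadratically."""
    rate = policy["rate"]
    return sum(rate * 100 - (rate ** 2) * 500 for _ in range(horizon))

# Hypothetical alternatives a community might put forward for comparison.
alternatives = [{"rate": r / 100} for r in range(0, 31, 5)]
best = max(alternatives, key=projected_outcome)
```

Even this toy version shows why simulation beats intuition here: the "optimum" sits at an interior value that neither the most aggressive nor the most conservative proposal would have found.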

This gives humans the data and analysis needed to make informed decisions about complex issues, while preserving their judgment and creativity (So the fear of this being the beginning of Skynet or the Matrix is unfounded… For now).

In some ways, this approach is similar to the current A.I. art and writing assistant craze going on now, making art generation and writing a lot easier.

What are the benefits of computer-aided governance?

Computer-Aided Governance (CAG) has several benefits that make it an attractive option for governments around the world.

In addition to efficiency and cost savings, CAG can also provide governments with a powerful tool for gathering data and insights into how their various systems and policies are performing.

This information can then be used to improve operations and create better policies.

Furthermore, since CAG is automated, it can help eliminate human errors that can lead to costly mistakes or delays.

Lastly, CAG can help to increase public engagement by providing citizens with access to real-time data on their government’s activities. This increased transparency can help to build trust between citizens and their government.

(Now back to reality)

What about Web3?

The blockchain is ideal for this analysis because of its closed ecosystem. The data is transparent and can be easily mined. Data is more easily obtainable and parsable than in the “real world.”

Many of the benefits listed above apply to Web3.

DAOs and other web3 communities can use this tool to help them make better governance choices.

For example, a DAO might vote on implementing an on-chain identity system that will require a KYC (know your customer) process in order to participate. If cadCAD were utilized, the DAO could see that the proposed KYC process could:

  • result in loss of revenue
  • cost too much to implement
  • require too many resources
  • delay the project by weeks or even months; etc.

In sum, CAG is a powerful tool for making decisions. Governments—if they were smart, really cared for their people, and had integrity—would greatly benefit from using this tool.

DAOs and other Web3 communities would also benefit from the transparency and increased engagement in their communities because of a more informed constituency.
