SCRF Interviews | Formal Methods in Complex System Design - Jamsheed Shorish and Michael Zargham (Ep. 16)

Part 4 of our 7-part series with the team at BlockScience features a conversation with Senior Research Scientist Jamsheed Shorish and BlockScience Founder and CEO Michael Zargham. They delve into issues around:

  • The definition of “formal methods”
  • The need for both validation and verification
  • The generalized dynamical systems (GDS) approach
  • Composability and its relation to energy functions

Jamsheed Shorish (JS) is a computational economist by training and now Senior Research Scientist at BlockScience. He works mainly on incentive mechanism design, control theory, and the ways entities can be designed so that desirable properties come into existence. He joined BlockScience three years ago.

Michael Zargham (MZ) is the Founder, Chief Engineer, and CEO of BlockScience. His background is in electrical and systems engineering, and his Ph.D. was in optimization and control of distributed networks. Today's subject, mathematical methods applied to system design, addresses some of the same core concepts that arise from applying systems engineering methodologies to large-scale automated systems.

The interview was conducted by Eugene Leventhal, Executive Director of SCRF.

Audio: Spotify, Apple, Spreaker

Key takeaways from the interview:

What is the difference between understanding how a system behaves and knowing how to get that system to behave that way?

JS: This is reminiscent of a parable about an engineer who builds a system and then interacts with it on a regular basis. Her interactions become the focal point for what she believes she knows about the system, even though she engineered it. Take a mobile phone. Everyone uses one, and everyone knows exactly what properties they would like it to have. People have worked very hard to generate those properties. But most people have no idea how to engineer a system that guarantees a mobile phone will have the properties everyone wants it to have. Bridging that separation, between familiarity with an engineered system and knowing how to craft it so that it exhibits the specified properties, is part of what formal methods for complex systems have to offer.

MZ: Try thinking about things in three views, starting with the thing itself that's being interacted with, followed by a second view that zooms out and a third view that zooms in. The view that zooms in shows the collection of systems that make up the thing being interacted with.

A good analogy is an analog watch. Hundreds of millions of people have worn one, but zoom in on that watch and very few people understand the numerous sub-systems of springs and gears that make it work as a time-keeping device. Zooming out from the watch gives a view of the system the watch exists in—the person who wears it, the reasons this person needs to keep track of time, the whole social context that person exists in, etc. The object is the sum of a bunch of components that make up a system, but it also does not exist in a vacuum.

When talking about mechanism design, the focus tends to be on the inner workings, the mathematical bits, but many times the actual value of the thing is not really determined by those mechanisms per se, but rather by how those mechanisms fit into a broader context that's more open-ended and less deterministic. By moving through those three views, one gets a better overall sense of the requirements for designing something.

What’s BlockScience’s definition of “formal methods”?

JS: First, systems should be understood as dynamic, meaning they change over time. For the last two centuries, differential equations have been used to describe how simple systems change incrementally over time. This led to a field of research called generalized dynamical systems, which looks at the ways that much more complicated structures change over time, with or without inputs from the environment or from other users.

MZ: Using these tools to create mechanisms that exhibit properties at the system level generally involves concepts like “invariants” and “identities.” Identity is usually used to describe something absolute, such as no double-spends on Bitcoin. An invariant is closer to an energy function that has some variation in it, but the variation is controlled. In web3 there are constant function market makers (CFMMs) implemented in smart contracts. They have energy level sets, often referred to as “invariants” or “constant functions,” even though they're not strictly constant; they contain small fees that generate returns for liquidity providers. They work like energy conservation equations: even though the quantity is not exactly constant, it has a well-defined source, and as value enters or leaves the system its flows are conserved. You haven't gained or lost any of it; you've kept track of exactly where it came from and where it went.
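To make that “controlled variation” concrete, here is a minimal sketch of a constant-product market maker with a fee, in Python. All names and numbers are illustrative, not drawn from any specific contract: with a zero fee the product of the reserves is exactly invariant, and with a small fee it can only grow, which is the return that accrues to liquidity providers.

```python
# Minimal constant-product market maker (x * y = k) with a small swap fee.
# All names (Pool, swap_x_for_y, FEE) are illustrative, not from any specific
# contract. With FEE = 0 the product x * y is exactly invariant; with FEE > 0
# it grows slightly on each trade -- the "controlled variation" that accrues
# to liquidity providers.

FEE = 0.003  # 0.3% fee, a common choice in practice

class Pool:
    def __init__(self, x: float, y: float):
        self.x = x  # reserve of token X
        self.y = y  # reserve of token Y

    def swap_x_for_y(self, dx: float) -> float:
        """Trade dx of X into the pool; return the amount of Y paid out."""
        dx_after_fee = dx * (1 - FEE)
        k = self.x * self.y                # pre-trade level set
        new_x = self.x + dx_after_fee
        dy = self.y - k / new_x            # keeps x * y = k on the fee-free part
        self.x += dx                       # the full dx (the fee stays in the pool)
        self.y -= dy
        return dy

pool = Pool(1000.0, 1000.0)
k_before = pool.x * pool.y
pool.swap_x_for_y(10.0)
k_after = pool.x * pool.y
assert k_after >= k_before  # the "invariant" never decreases
```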

How does this relate to the narrower context of “known execution of code”?

MZ: There are two distinct but related concepts in systems engineering: validation and verification. “Known execution of code” corresponds to one of them, verification. The “three views of a system” mentioned earlier come directly from this pairing. Validation is outward-looking; verification is inward-looking.

Most of the time, people who build software use verification as their formal method. They talk about it at great length, using “code correctness provers” to verify that their code produces exactly the results intended. But that type of verification leaves out the whole question of whether that was actually the right code to execute. The complementary concept of validation concerns itself not with whether the code was implemented correctly, but rather whether what was implemented was the correct thing.

In any system, validation and verification should always be considered a pair. The process (see the sketch after the list below) is:

  • Define the requirements
  • Define the specifications
  • Implement the specifications
  • Verify that the implementation matches the specifications
  • Validate that the implemented thing fulfills the desired requirements or functions in that system.
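As a toy illustration of the pairing, consider a hypothetical fee calculation. Everything here (the requirement, the specification, and the names) is invented for illustration; the point is only that verification checks the code against the spec, while validation checks the specified behavior against the requirement.

```python
# Toy illustration of the verification/validation pairing; all names are
# hypothetical. Requirement (outward-looking): users must never be charged
# more than they authorized. Specification (inward-looking): fee equals
# amount * rate, rounded down.

def fee(amount_cents: int, rate_bps: int) -> int:
    """Implementation of the specification: floor(amount * rate)."""
    return (amount_cents * rate_bps) // 10_000

def verify():
    # Verification: the implementation matches the specification.
    assert fee(10_000, 30) == 30   # 0.30% of $100.00 is $0.30
    assert fee(1, 30) == 0         # rounds down, per the spec

def validate():
    # Validation: the specified behavior serves the actual requirement.
    # A perfectly verified implementation still fails validation if, say,
    # the requirement called for a fee *cap* rather than a flat rate.
    for amount in (1, 99, 10_000):
        assert fee(amount, 30) <= amount  # never charge more than authorized

verify()
validate()
```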

That's a combination of the inward and outward views. When stacking subsystems into complex systems, with layerings or even networks of dependencies between components, there's no way to get a meaningfully composed system if this validation/verification nesting can't be done at every level. That nesting is the essence of applying formal methods in systems engineering.

Much of our own work resides in determining what kinds of “emergent” properties are actually guaranteeable, and under what conditions. Those are things that can also be proved, but they look outward. And so the validation side of the equation is also a bigger part of the design workflow. If one is looking inward, then something must have already been built or specified. If something is already specified, that implies that someone designed it, which implies that they had some intent. If one is only doing verification-style formal methods, that’s not the same as doing formal methods in design; that’s only doing formal methods in testing.

Coming back to generalized dynamical systems (GDS), how does this relate to the composability of system entities and the properties they can be expected to have?

JS: Zargham was talking about how you get that flow from what is being done internally to what is being seen on the outside. That all hinges on the idea that the pieces being examined individually also fit together correctly. Systems should lend themselves to being examined in detail that way. The sub-systems of an analog watch can be examined individually, but ultimately the desire is for the whole watch to start ticking. The integration should occur at a level where the properties being looked for on the outside are actually fulfilled.

One of the tools that allows nesting or embedding the GDS approach comes from a part of applied mathematics called category theory. It lets one see how the individual entities in a whole collection are connected to each other. When the relations between collections of objects become evident, when their linkages are obvious, then they are going to “compose” in a consistent fashion. From there it's possible to say what the properties of a composed system will be.
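Setting the category-theoretic machinery aside, the compositional claim can be sketched in a few lines of Python: if each building block provably conserves a quantity, then any sequential composition of such blocks conserves it too. The names below are illustrative.

```python
# Minimal sketch of the compositional idea (without the category theory):
# each block is a state-update map that conserves total token supply, so
# composing conserving blocks yields a conserving system. Names illustrative.

from functools import reduce
from typing import Callable, Dict

State = Dict[str, float]            # balances by account
Block = Callable[[State], State]    # a state-update map

def transfer(src: str, dst: str, amt: float) -> Block:
    """A conserving block: moves value, never creates or destroys it."""
    def step(state: State) -> State:
        new = dict(state)
        moved = min(amt, new.get(src, 0.0))  # can't move more than exists
        new[src] = new.get(src, 0.0) - moved
        new[dst] = new.get(dst, 0.0) + moved
        return new
    return step

def compose(*blocks: Block) -> Block:
    """Sequential composition: the conserved quantity survives each stage."""
    return lambda state: reduce(lambda s, b: b(s), blocks, state)

system = compose(transfer("alice", "pool", 40.0),
                 transfer("pool", "bob", 25.0))
before = {"alice": 100.0, "pool": 0.0, "bob": 0.0}
after = system(before)
assert sum(before.values()) == sum(after.values())  # supply conserved end to end
```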

Mechanism design turns this around a bit to ask: given the properties the designer wants, what types of composable families or classes are actually feasible, versus ones that would be preferable to have but turn out not to be operationally possible?

MZ: Jamsheed emphasized composability, and that's fundamental to BlockScience's work. Much of the firm's design work relies on being able to guarantee properties of a system under relatively weak, or even borderline non-existent, behavioral assumptions. That's what gives the work strong composability properties, and that's also what led BlockScience to focus on things like energy functions.

As in the physical world, if energy assumptions are baked into systems in ways that can’t be violated, then it doesn’t matter why a given thing is being done. The laws of conservation of mass and energy can’t be broken just because someone came up with a cool new idea. If concepts analogous to energy functions are baked into mechanism design, then the properties associated with those energy functions are the ones that will stand up to composition, even for unknown future systems. Let’s talk a bit more about why designers can weaken their assumptions about user behavior and, thus, aim to design mechanisms that are a bit more resistant to changes in the behavioral context in which agents make decisions.

Can you also comment on the relationship between system designers and the users who interact with their systems?

JS: When designing a system, the designer often does not have the luxury of knowing precisely how an individual is going to interact with it. Nor does the designer have the luxury of assuming that users will understand the system well enough to game it and capture any money left on the table. So instead, designers look for properties that are agnostic to the behaviors that generate actions, and look at which actions are admissible from one period to the next. That lets the system retain the flexibility to accommodate any behavior at the interface between what people intend and what they are allowed to do. So instead of assuming a behavior, the designer asks, “What's the gateway into the system?” And that gateway can be made as restrictive or as broad as we like.
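One way to read “what's the gateway into the system” in code is as an admissibility check that never models why an agent acts, only which actions the system will accept in its current state. The following sketch uses hypothetical names; the design question is only how restrictive or broad the admissible set should be.

```python
# Sketch of an admissibility gateway (all names hypothetical): the system
# never models *why* an agent acts, only *which* actions it will accept in
# the current state. Behavior-agnostic design lives in this check.

from dataclasses import dataclass

@dataclass
class State:
    balance: float
    paused: bool

@dataclass
class Withdraw:
    account: str
    amount: float

def admissible(state: State, action: Withdraw) -> bool:
    """The gateway: restrict or broaden this set as the design requires."""
    return (not state.paused
            and action.amount > 0
            and action.amount <= state.balance)

def apply(state: State, action: Withdraw) -> State:
    if not admissible(state, action):
        return state  # inadmissible actions leave the state untouched
    return State(balance=state.balance - action.amount, paused=state.paused)

s = State(balance=50.0, paused=False)
s = apply(s, Withdraw("alice", 30.0))    # admissible: executes
s = apply(s, Withdraw("alice", 100.0))   # inadmissible: ignored
assert s.balance == 20.0
```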

MZ: The inward-looking vs. outward-looking dichotomy comes into play here, too. Most systems don't fail at their components; they fail at their interconnections. No matter how carefully engineered a given part may be, if it's not clipped in correctly to the larger thing it is part of, it's still going to create a mess.

In cryptosystems, people can carefully engineer and even formally verify their chain-code, then embed it in a social system for which it's just not a good fit. Because they have no mechanism by which to iterate on that code, they're stuck with “code is law”-style norms that prevent them from evolving the policies enshrined in it. The mechanisms might be beautifully crafted, but they didn't fit the outer system.

And the thing about the outer system is that it tends to evolve naturally. BlockScience argues for encoding rules in smart contracts about how, when, and by which parties state-mutating transactions can be called. And there's always the possibility of having to declare a v2 and do a migration of some kind. These kinds of governance mechanisms are important, and important not to use too liberally. There's an idea of timescales from the control theory and systems engineering literature that asks not just “what's going to change?” but “on what timescale is it going to change?” This goes back to the discussions of governance. The design, development, deployment, and operation of these automated infrastructures all face the question, “what is ongoing maintenance and governance?” One of the critical elements of dynamical systems theory is that timescales matter.

Regarding the life cycle of dynamical systems, how do we know what that life cycle looks like? Is it a consistent pattern of flow? And how do we know where we are in this life cycle?

JS: It's not a snapshot in time, but something seen across time. If one views the larger system as being composed of subsystems that exist at different points in time, then the same formal representation of a composed system being examined at this moment also extends further in time. What the governance system will look like in the future may not be known, but it is known that some particular protocol parameters, for example, should be the adjustable ones. Then, no matter what governance process occurs in the future, the choices that can be made won't destabilize that part of the system. So it's not just what's admissible at one point in time, but how that admissibility changes over time. Let's examine what users are allowed to change in the protocol, safeguard certain features, and ensure that they will be propagated, no matter what we might see in the future.
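A minimal sketch of that idea follows, with hypothetical parameter names and bounds: governance may adjust certain parameters, but only within ranges fixed at design time, so no admissible future choice can destabilize that part of the system.

```python
# Sketch of safeguarding adjustable parameters across time (names and bounds
# hypothetical): governance may move a parameter, but only inside ranges
# fixed at design time, so no future vote can destabilize this part of the
# system.

BOUNDS = {
    "interest_rate": (0.0, 0.20),     # adjustable, but capped
    "liquidation_ratio": (1.1, 3.0),  # adjustable, but bounded away from 1
}

params = {"interest_rate": 0.05, "liquidation_ratio": 1.5}

def governance_set(name: str, value: float) -> None:
    """Admissible parameter changes only; everything else is rejected."""
    if name not in BOUNDS:
        raise ValueError(f"{name} is not governable")
    lo, hi = BOUNDS[name]
    if not lo <= value <= hi:
        raise ValueError(f"{name}={value} outside admissible range [{lo}, {hi}]")
    params[name] = value

governance_set("interest_rate", 0.10)      # fine: within bounds
try:
    governance_set("interest_rate", 0.95)  # rejected: would destabilize
except ValueError as e:
    print(e)
```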

MZ: There's a deep tradeoff between making a system mutable or immutable. Take the analogy of a bridge: there will be seasonal changes, there will be differences among the people using it, but the bridge should remain constant. All of the effort that goes into it during its lifetime will be care and maintenance, upkeep. So there's a timescale associated with maintaining the extant properties of the system, and a different timescale associated with the system adapting to its context. Adding a new lane to a highway is an adaptation, not just maintenance. Periodic repaving is purely maintenance. The timescale of maintenance is whatever it needs to be to maintain the designed properties.

Before we assume that this applies only to physical systems, look at something like automatically closing collateralized debt positions in a system like Maker or RAI. That’s clearly maintenance. But using levers like changing the interest rates to make the system achieve desired goals goes beyond simple maintenance.
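A rough sketch of that distinction, with purely illustrative numbers: automatically closing an under-collateralized position maintains a designed property, solvency, without changing the design itself. Adjusting the interest-rate parameter would be the adaptation lever.

```python
# Minimal sketch of "maintenance" in a Maker/RAI-style system (all numbers
# illustrative): automatically closing under-collateralized positions keeps
# a designed property -- solvency -- intact without changing the design.

LIQUIDATION_RATIO = 1.5  # collateral must cover 150% of debt

def needs_liquidation(collateral: float, price: float, debt: float) -> bool:
    """Maintenance check: does this position still satisfy the invariant?"""
    return collateral * price < debt * LIQUIDATION_RATIO

# A position holding 10 units of collateral against 1000 units of debt:
assert not needs_liquidation(10.0, 200.0, 1000.0)  # healthy at price 200
assert needs_liquidation(10.0, 120.0, 1000.0)      # closed when price falls
```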

Can you talk about a specific project where you’ve applied some of these methods?

JS: RAI is a really good example because it's an extension of the idea of having collateralized debt. What they've been able to do in a novel way is automate how the redemption of debt for collateral occurs. They've adopted a particular kind of automation, a specific controller that examines the environment around it and decides how to move the price accordingly. The whole idea relies on a formal understanding of how responses to outside events are articulated into the system, so that what is registered as an outside impact can actually be trusted. Individuals using the system can look away from it, and when they look back, nothing has changed so radically that their own investment position has been rendered moot or incorrect.
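For flavor, here is a minimal proportional-integral (PI) feedback rule of the general sort RAI's controller is built on. The gains, sign conventions, and single-variable simplification here are illustrative assumptions, not RAI's actual design.

```python
# Sketch of a proportional-integral (PI) feedback rule of the general sort
# RAI's controller is built on. The gains, sign conventions, and one-variable
# simplification are illustrative assumptions, not RAI's actual parameters.

KP, KI = 0.05, 0.01  # illustrative controller gains

def pi_step(redemption: float, market: float, integral: float,
            dt: float = 1.0) -> tuple[float, float]:
    """One control period: move the redemption price against the error."""
    error = redemption - market   # the deviation the controller fights
    integral += error * dt        # accumulated past deviation
    redemption -= KP * error + KI * integral
    return redemption, integral

redemption, integral = 3.00, 0.0
for market in (3.10, 3.08, 3.05, 3.02):  # market price drifting off target
    redemption, integral = pi_step(redemption, market, integral)
    print(f"redemption={redemption:.4f}")  # gap shrinks period by period
```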

MZ: A really good starting point for understanding game theory in the more general way associated with formal methods in engineering is to look into differential games. The canonical teaching model is called “the homicidal chauffeur.” It's a dynamic game: the actors have admissible actions, such as laws of motion governing how they're allowed to move, and they have competing objectives. It flows; it has dynamics. That's how engineers are typically first introduced to game-theoretic constructs. The idea that games can be represented as dynamical systems, rather than as flat snapshots of the world, is a really important one in transitioning from toy models that illustrate basic concepts to actually designing and testing things.
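To show what a game with laws of motion looks like as a dynamical system, here is a toy discrete-time pursuit-evasion loop. It is not the homicidal-chauffeur model itself (which constrains the pursuer's turning radius); it only illustrates admissible actions as laws of motion with competing objectives.

```python
# A dynamic game as a dynamical system: a toy discrete-time pursuit-evasion
# sketch. This is *not* the full homicidal-chauffeur model (which constrains
# the pursuer's turning radius); it just shows actions as laws of motion.

import math

def step(px, py, ex, ey, v_p=1.2, v_e=1.0):
    """One tick: the pursuer moves toward the evader, the evader flees."""
    dx, dy = ex - px, ey - py
    d = math.hypot(dx, dy)
    px, py = px + v_p * dx / d, py + v_p * dy / d  # pursuer's admissible move
    ex, ey = ex + v_e * dx / d, ey + v_e * dy / d  # evader flees along the same ray
    return px, py, ex, ey

px, py, ex, ey = 0.0, 0.0, 10.0, 0.0
for t in range(60):
    px, py, ex, ey = step(px, py, ex, ey)
    if math.hypot(ex - px, ey - py) < 1.0:
        print(f"captured at t={t}")  # the faster pursuer closes the gap
        break
```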

Again, formal methods do not include only verification. If one is doing systems engineering in a formal way, there must be both verification and validation. For those interested, check out Brian Douglas's video series on systems engineering.


This was an insightful post on formal methods in complex system design. The “inward-looking” and “outward-looking” analogy helped me understand validation and verification better. Proper validation can have spill-over benefits like better customer satisfaction metrics and easier marketing. The article does well in emphasizing how important mechanism design is to validation.

I liked the example MZ described when reflecting on the laws of conservation of mass and energy. It puts mechanism design into an easily digestible form. In mechanism design, an agent (the principal) chooses the payoff structure for the game other players are participating in. Whether the payoff structure is in equilibrium depends on the principal, but with new analysis strategies like compositional game theory [1], we can model equilibria as new games are made. Compositional game theory also uses category theory (mentioned in the post as a tool for generalized dynamical systems). It would be interesting to see examples of category theory being applied in complex system design. Examples are always great for comprehension.

The conservation example can be expanded to see the solar system as a mechanism or economic system of sorts. Take life on Earth: energy and mass are conserved here, transferred and transmuted at tremendous scales, mostly through living organisms. However, all life's sustenance can be traced back to the Sun (and water and soil, but let's focus on the Sun). If the solar system were a mechanism, sunlight and the energy from it would be the payoffs, while beings like humans are the mechanism designers. We've historically optimized our agriculture-based civilizations to capture as much of the Sun's power as possible so that we can survive. This raises the question: can complex systems even work without some kind of life source?

One could say that in certain protocols, say AAVE, the “life source” comes from borrowers. When someone takes out a loan, the protocol can earn value by collecting interest or through liquidation events. Here lies a chicken-and-egg problem: a protocol must have users in order to generate enough incentives for the mechanism to truly work. This brings us back to validation, where user research and product-market fit play a key role in determining whether a system fulfills customers' needs. One thing is certain: the complexity of the solar system dwarfs the complexity of a smart contract protocol. Efficiency requires innovation, and innovation requires change.

Composability plays a key role in web3, and it makes sense that mutable and immutable systems come with their own pros and cons. However, can we innovate to our maximum capacity with immutable systems? The bridge example in the article divides maintenance and adaptation well. Maintenance is expected and should be done as part of the system upkeep process. Adaptations are not essential and should be carefully analyzed. Though maintenance typically doesn't require a change to the protocol (the Maker and RAI examples), is it realistic to imagine a system that has no need for adaptations in the future? For the bridge example, what happens when the area's population spikes and traffic on the bridge gets congested daily? Would this adaptation be necessary? How would we decide whether to add extra lanes to the bridge or simply build another one next to it?

In a blockchain protocol, we can visualize this example with a smart contract protocol built on a layer 1 ecosystem. Of course the smart contract has maintenance, but what happens when gas fees are too high for users to effectively use the protocol? Do we do some type of proxy contract and gas golfing to make the experience cheaper? Or do we just deploy the same project on an L2 (building a new bridge)? The payoffs of the two options are different due to the immutable nature of blockchains. If smart contracts were mutable, gas golfing would be the first option for many engineering teams. Since they aren't, the L2 option looks better. Plenty of arguments can be made about this stance; what matters is that we know how important it is to choose permanent options wisely. Composability affects people's decision making, and decision making is what runs the system. Depending on what the system can do, people will make different choices.

Governance is a critical part of system upkeep and adaptation. It is probably one of the best forms of validation, because the customers and users are actively deciding what should be done with the system. It is true that the system designer does not have the luxury of knowing what users will want and how people will behave in the protocol. However, a detailed forum post about a proposed upgrade, with an active community, can produce more user feedback and data than any email survey can. This is where decentralized protocols can outdo traditional internet business models: by making community members participants, shareholders, bug bounty hunters, etc., web3 companies can outcompete many traditional web2 businesses. The question remains: even after verification and validation, what happens when the protocol itself just isn't intuitive for people to use? How do you create something that is a seamless experience but also an effective value generator? The podcast mentioned behavioral economics and sciences, but I think the answer is more elusive than we think. The entire paradigm of user research could change when governance involvement becomes more mainstream. What happens when more users demand more changes? Will governance push the ecosystem toward a more adaptable tech structure?

Democratic processes for protocols will only get more complex. I think a unique approach to system design would be to include governance in the system itself. We don't want things to become so complex that community members won't participate, but decision making should go hand-in-hand with the protocol itself. Whether it's giving power users more sway or simply aligning everyone's incentives, governance is set to be an essential feature of decentralized protocols.

Lots of open-ended questions were asked. To sum things up, here are the key questions that remain open:

  • How do you track value (energy, mass, etc.) in a generalized dynamical system?
  • In any given system, what necessitates an upgrade vs. simply a new system?
  • As governance becomes more popular, will mutable systems grow in popularity too?

[1] Compositional Game Theory: https://arxiv.org/pdf/1603.04641.pdf
