Part 4 of our 7-part series with the team at BlockScience features a conversation with Senior Research Scientist Jamsheed Shorish and BlockScience Founder and CEO Michael Zargham. They delve into issues around:
- The definition of “formal methods”
- The need for both validation and verification
- The generalized dynamical systems (GDS) approach
- Composability and its relation to energy functions
Jamsheed Shorish (JS) is a computational economist by training and now Senior Research Scientist at BlockScience. He works mainly on incentive mechanism design, control theory, and the design of entities intended to bring desirable properties into existence. He joined BlockScience three years ago.
Michael Zargham (MZ) is the Founder, Chief Engineer, and CEO of BlockScience. His background is in electrical and systems engineering. He did his Ph.D. in optimization and control of distributed networks. Today’s subject—mathematical methods applied to system design—addresses some of the same core concepts that arise from applying systems engineering methodologies to large-scale automated systems.
The interview was conducted by Eugene Leventhal, Executive Director of SCRF.
What is the difference between understanding how a system behaves vs. knowing how to get that system to behave in that way?
JS : This is reminiscent of a parable about engineering a system that the engineer then interacts with on a regular basis. Her interactions become the focal point for what she believes she knows about the system, even though she engineered it. Take a mobile phone. Everyone uses one, everyone knows exactly what properties they would like it to have. People have worked very hard to generate those properties. But most people have no idea how to engineer a system that guarantees that a mobile phone will have the properties everyone wants it to have. So the separation between familiarity with an engineered system and being able to figure out how to craft that system to exhibit the specified properties is part of what formal methods in complex systems have to offer.
MZ : Try thinking about things in three views, starting with the thing itself that’s being interacted with, followed by another view that zooms out, and a third view that zooms in. The view that zooms in shows a collection of systems that make up the thing that’s being interacted with.
A good analogy is an analog watch. Hundreds of millions of people have worn one, but zoom in on that watch and very few people understand the numerous sub-systems of springs and gears that make it work as a time-keeping device. Zooming out from the watch gives a view of the system the watch exists in—the person who wears it, the reasons this person needs to keep track of time, the whole social context that person exists in, etc. The object is the sum of a bunch of components that make up a system, but it also does not exist in a vacuum.
When talking about mechanism design, the focus tends to be on the inner workings, the mathematical bits, but many times the actual value of the thing is not really determined by those mechanisms per se, but rather how those mechanisms fit in a broader context that’s more open-ended and less deterministic. By moving through those three views, one gets a better overall sense of the requirements for designing something.
JS : First, systems should be understood as dynamic, meaning they change over time. For the last two centuries, differential equations have been used to describe how simple systems change incrementally over time. This led to a field of research called generalized dynamical systems, which looks at the ways that much more complicated structures change over time, with or without inputs from the environment or from other users.
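The core idea can be sketched as a discrete-time state-transition map: a state that changes incrementally, possibly driven by inputs from the environment. This is a minimal illustration, not any particular BlockScience model; the transition rule and numbers are made up.

```python
# Minimal sketch of a discrete-time dynamical system: state x evolves under
# a transition map f that also takes an input u from the environment.
# The specific update rule here is illustrative only.

def step(x, u):
    """One increment of change: x_{t+1} = f(x_t, u_t)."""
    return 0.9 * x + u  # a simple, stable linear update

def trajectory(x0, inputs):
    """Iterate the map to see how the state changes over time."""
    states = [x0]
    for u in inputs:
        states.append(step(states[-1], u))
    return states

# With no input from the environment, the state decays toward zero.
states = trajectory(1.0, [0.0, 0.0, 0.0])
```

Generalized dynamical systems extend this picture from simple numeric states to much richer structures, but the shape of the question is the same: given the current configuration and any inputs, what configurations are reachable next?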
MZ : Using these tools to create mechanisms that exhibit properties at the system level generally involves concepts like “invariance” and “identities.” Identity is usually used to describe something absolute, such as no double-spends on Bitcoin. Invariance is closer to an energy function that has some variation in it, but the variation is controlled. In web3 there are constant function market makers (CFMMs) that are implemented in smart contracts. They have energy level sets, often referred to as “invariants” or “constant functions,” even though they’re not strictly constant; they contain small fees that generate returns for liquidity providers. They work like energy conservation equations: even though the quantity isn’t strictly constant, every change has a well-defined source, and flows entering and leaving the system are fully accounted for. Nothing is gained or lost untracked; you know exactly where it came from and where it went.
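A toy constant-product market maker shows the point about “constant functions” that are not strictly constant: with a fee, the level set drifts upward on every trade, and that drift is exactly the value accruing to liquidity providers. This is a simplified sketch; real CFMM implementations such as Uniswap add integer arithmetic and many edge cases, and the fee value here is just a common illustrative choice.

```python
# Sketch of a constant-product market maker (x * y = k) with a swap fee.
# With a fee, the "invariant" k grows slightly on every trade: the level
# set is conserved or improved, never violated.

FEE = 0.003  # 0.3% fee (illustrative)

def swap_x_for_y(x, y, dx):
    """Trade dx of asset X into the pool; return new reserves and dy out."""
    k = x * y
    dx_after_fee = dx * (1 - FEE)       # the fee portion stays in the pool
    dy = y - k / (x + dx_after_fee)     # output priced on fee-adjusted input
    return x + dx, y - dy, dy           # the full input is deposited

x, y = 1000.0, 1000.0
new_x, new_y, dy_out = swap_x_for_y(x, y, 10.0)
assert new_x * new_y >= x * y  # the energy-like invariant never decreases
```

This is the sense in which the mechanism behaves like an energy conservation equation: the quantity k is not literally constant, but every change to it has a well-defined source (the fee), so nothing enters or leaves untracked.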
MZ : There are two distinct but related concepts in systems engineering, called “validation” and “verification.” The “three views of a system” mentioned earlier come directly from this pairing. Validation is outward-looking; verification is inward-looking.
Most of the time, people who build software use verification as their formal method. They talk about it at great length, using “code correctness provers” to verify that their code produces exactly the results intended. But that type of verification leaves out the whole question of whether that was actually the right code to execute. The complementary concept of validation concerns itself not with whether the code was implemented correctly, but rather whether what was implemented was the correct thing.
In any system, validation and verification should always be considered a pair. The process is:
- Define the requirements
- Define the specifications
- Implement the specifications
- Verify that the implementation matches the specifications
- Validate that the implemented thing fulfills the desired requirements or functions in that system.
That’s a combination of the inward and outward views. When stacking subsystems into complex systems with layerings or even networks of dependencies between components, there’s no way to get a meaningfully composed system unless this validation/verification nesting is done at every level. That’s the essence of applying formal methods in systems engineering.
Much of our own work resides in determining what kinds of “emergent” properties are actually guaranteeable, and under what conditions. Those are things that can also be proved, but they look outward. And so the validation side of the equation is also a bigger part of the design workflow. If one is looking inward, then something must have already been built or specified. If something is already specified, that implies that someone designed it, which implies that they had some intent. If one is only doing verification-style formal methods, that’s not the same as doing formal methods in design; that’s only doing formal methods in testing.
Coming back to generalized dynamical systems (GDS), how does this relate to the composability of system entities and the properties they can be expected to have?
JS : Zargham was talking about how you get that flow from what is being done internally to what is being seen on the outside. That all hinges on the idea that the pieces being looked at individually also fit together correctly. Systems should lend themselves to being examined in detail that way. The sub-systems of an analog watch can be examined individually, but ultimately the desire is for the whole watch to start ticking. The integration should occur at a level where the properties being looked for on the outside are actually being fulfilled.
One of the tools that allow nesting or embedding the GDS approach comes from a part of applied mathematics called category theory. That allows one to see how a whole collection of individual entities are connected to each other. When the relations between collections of objects become evident, when their linkages are obvious, then they are going to “compose” in a consistent fashion. From there it’s possible to say what the properties of a composed system will be.
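The flavor of this, though not the category theory itself, can be shown with plain functions: if each subsystem preserves an energy-like invariant, and the subsystems are connected by sequential composition, the composed system preserves it too. The balance-transfer example is invented for illustration.

```python
# Sketch of compositionality: each subsystem conserves a quantity (total
# balance), so any sequential composition of them conserves it as well.
# Category theory makes this precise; this is only the intuition.

def transfer(state, src, dst, amount):
    """A balance-conserving update on a dict of account balances."""
    new = dict(state)
    new[src] -= amount
    new[dst] += amount
    return new

def compose(f, g):
    """Sequential composition of two state updates: first f, then g."""
    return lambda state: g(f(state))

step1 = lambda s: transfer(s, "a", "b", 5)
step2 = lambda s: transfer(s, "b", "c", 3)
pipeline = compose(step1, step2)

s0 = {"a": 10, "b": 0, "c": 0}
s1 = pipeline(s0)
assert sum(s1.values()) == sum(s0.values())  # conservation survives composition
```

Because the conservation property holds for each piece and is stable under composition, it can be asserted of the composed system without re-examining the internals of every piece.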
Mechanism design turns this around a bit to say, “given the properties that the designer wants to have, what types of composability families or classes exist that are feasible, versus the ones it would be preferable to have but turn out not to be operationally possible?”
MZ : Jamsheed emphasized composability, and that’s something that’s pretty basic to BlockScience’s work. Much of the firm’s design work relies on being able to make assumptions about the properties of a system with relatively weak or even borderline non-existent behavioral assumptions. That’s what gives the work strong composability properties, and that’s also what led BlockScience to focus on things like energy functions.
As in the physical world, if energy assumptions are baked into systems in ways that can’t be violated, then it doesn’t matter why a given thing is being done. The laws of conservation of mass and energy can’t be broken just because someone came up with a cool new idea. If concepts analogous to energy functions are baked into mechanism design, then the properties associated with those energy functions are the ones that will stand up to composition, even for unknown future systems. Let’s talk a bit more about why designers can weaken their assumptions about user behavior and, thus, aim to design mechanisms that are a bit more resistant to changes in the behavioral context in which agents make decisions.
Can you also comment on the relationship between system designers and the users who interact with their systems?
JS : When designing a system, the designer often does not have the luxury of knowing precisely how an individual is going to interact. Nor does the designer have the luxury of saying that users are going to understand the system well enough to be able to game it and get any money left on the table. So instead, designers look for those properties that are agnostic to the behaviors that generate actions and look at what actions are admissible from one period to the next. That allows the system to retain the flexibility to incorporate any behavior that allows that interface between what people think and what they’re allowed to do. So instead of assuming a behavior, the designer says, “What’s the gateway into the system?” And that can be made as restrictive or as broad as we like.
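One way to picture this behavior-agnostic gateway: the mechanism never models why a user acts, it only checks whether a proposed action is admissible from the current state. The withdrawal example below is invented for illustration.

```python
# Sketch of behavior-agnostic design: instead of assuming a behavioral
# model, the mechanism defines a gateway of admissible actions. Any
# behavior, rational, adversarial, or random, must pass through it.

def admissible(state, action):
    """The gateway: which actions are allowed from the current state?"""
    kind, amount = action
    if kind == "withdraw":
        return 0 < amount <= state["balance"]
    return False  # everything not explicitly allowed is rejected

def apply_action(state, action):
    """Apply an action only if the gateway admits it."""
    if not admissible(state, action):
        raise ValueError("inadmissible action")
    _, amount = action
    return {"balance": state["balance"] - amount}

s = {"balance": 100}
s = apply_action(s, ("withdraw", 30))  # admitted; balance becomes 70
```

Properties proved about the admissible set (here, that balances never go negative) then hold regardless of what behavioral model generates the actions, which is precisely what lets the designer weaken behavioral assumptions.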
MZ : The inward-looking vs. outward-looking dichotomy comes into play here, too. Most systems don’t fail on their components; they fail on their interconnections. No matter how carefully engineered a given part may be, if it’s not clipped in correctly to the larger thing it is part of, it’s still going to create a mess.
In cryptosystems, people can carefully engineer and even formally verify their chain-code and then embed it in a social system for which it’s just not a good fit. Because they have no way to iterate on that mechanism, they’re stuck with “code is law”-style norms that prevent them from evolving the policies enshrined in that code. The mechanisms might be beautifully crafted, but they didn’t fit in the outer system.
And the thing about the outer system is that it will also tend to evolve naturally. BlockScience argues for having rules in smart contracts governing how, when, and by which parties the contracts can actually be mutated through transactions. And there’s always the possibility of having to declare a v.2 and do a migration of some kind. These kinds of governance mechanisms are important, and important not to use too liberally. There’s an idea of timescales that comes from the control theory and systems engineering literature that asks not just “what’s going to change?” but “on what timescale is it going to change?” This goes back to the discussions of governance. The design, development, deployment, and operations of these automated infrastructures all face the question, “what is ongoing maintenance and governance?” One of the critical elements of dynamical systems theory is that timescales matter.
Regarding the life cycle of dynamical systems, how do we know what that life cycle looks like? Is it a consistent pattern of flow? And how do we know where we are in this life cycle?
JS : It’s not a snapshot in time, but something seen across time. If one views the larger system as being composed of subsystems that exist at different points in time, then the same formal representation of a composed system that is being examined at that moment also extends further into time. What the governance system will look like in the future may not be known, but it is known that some particular protocol parameters, for example, should be the ones that are adjustable. Then, no matter what governance process might occur in the future, the choices that can be made won’t destabilize that part of the system. So it’s not just what’s admissible at one point in time, but how does that admissibility change over time? Let’s examine what users are allowed to change in the protocol, and safeguard certain features and ensure that they will be propagated, no matter what we might see in the future.
MZ : There’s a deep tradeoff between making a system mutable or immutable. Consider a bridge: there will be seasonal changes, there will be differences among the people using it, but the bridge should remain constant. All of the effort that goes into it during its lifetime will be care and maintenance, upkeep. So there’s a timescale associated with maintaining the extant properties of the system, and a different timescale associated with that system adapting to its context. Adding a new lane to a highway is an adaptation, not just maintenance. Periodic repaving is purely maintenance. The timescale of maintenance is whatever it needs to be to maintain the designed properties.
Before we assume that this applies only to physical systems, look at something like automatically closing collateralized debt positions in a system like Maker or RAI. That’s clearly maintenance. But using levers like changing the interest rates to make the system achieve desired goals goes beyond simple maintenance.
JS : RAI is a really good example because it’s an extension of the idea of having collateralized debt. But what they’ve been able to do in a novel way is attempt to automate the way in which the redemption of debt for the collateral occurs. They’ve adopted a particular kind of automation, a specific controller that examines the environment around it and decides how to move the price accordingly. The whole idea relies upon the formal understanding of how the response to things coming from outside is even articulated into the system so that what is being seen as an outside impact is actually trustworthy. Individuals who are actually using the system can look away from it, and when they look back, nothing has changed so radically that it has rendered their own investment opportunity either moot or incorrect.
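The automation being described can be caricatured with a simple proportional controller that nudges an internal price toward an externally observed one. To be clear about assumptions: RAI’s actual controller is a PI controller acting on a redemption *rate*, with its own gains and sign conventions; this sketch only conveys the feedback-loop idea, and the gain and prices are invented.

```python
# Toy proportional controller in the spirit of an automated redemption
# mechanism: observe the gap between a market signal and an internal
# price, and close part of the gap each period. Gain is illustrative.

KP = 0.2  # proportional gain (made up for this example)

def control_step(internal_price, market_price):
    """One control period: move the internal price toward the market."""
    error = market_price - internal_price
    return internal_price + KP * error

internal, market = 3.00, 3.50
for _ in range(20):
    internal = control_step(internal, market)

assert abs(internal - market) < 0.05  # the gap shrinks over time
```

The stability point in the interview corresponds to the controller’s error decaying predictably: users can look away and look back without finding the system in a radically different regime.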
MZ : A really good starting point for understanding game theory in a more general way that’s associated with formal methods in engineering would be to look into differential games. The canonical teaching model is called “the homicidal chauffeur.” It’s a dynamic game: the actors have admissible actions, laws of motion governing how they’re allowed to move, and they have competing objectives. It flows, it has dynamics. That’s how engineers are typically first introduced to game-theoretic constructs. The idea that games can be represented as dynamical systems rather than as flat snapshots of the world is a really important one in transitioning from toy models that illustrate basic concepts or theories to actually designing and testing things.
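A stripped-down pursuit-evasion game shows the shape of a differential game in discrete time: each player has a law of motion and a competing objective, and the outcome emerges from the dynamics. This is far simpler than the homicidal chauffeur, which has richer kinematics (a turning-radius constraint on the pursuer); speeds and positions here are invented.

```python
# Sketch of a pursuit-evasion game as a dynamical system. Each player's
# admissible actions are its law of motion; the pursuer wants capture,
# the evader wants escape. The game unfolds over time, not as one shot.
import math

def pursuer_step(p, target, speed=1.5):
    """Pursuer's law of motion: move straight toward the evader."""
    dx, dy = target[0] - p[0], target[1] - p[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target  # capture
    return (p[0] + speed * dx / dist, p[1] + speed * dy / dist)

def evader_step(e, speed=1.0):
    """Evader's law of motion: flee along the x-axis (a simple policy)."""
    return (e[0] + speed, e[1])

pursuer, evader = (0.0, 0.0), (5.0, 0.0)
for _ in range(20):
    evader = evader_step(evader)
    pursuer = pursuer_step(pursuer, evader)
    if pursuer == evader:
        break  # the faster pursuer eventually catches the evader
```

Because the pursuer is faster, the gap shrinks by a fixed amount each period, so capture is guaranteed; questions of that kind (who wins, from which initial conditions) are exactly what differential game theory answers.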
Again, formal methods do not just include verification. If one is doing systems engineering in a formal way, there must be both verification and validation. For those interested, Brian Douglas has a helpful series of videos on systems engineering.