Research Summary: A survey of autonomic computing—degrees, models, and applications

TLDR:

• Autonomic computing describes the idea of designing large, complex systems with characteristics that enable them to be largely self-managing
• How can building networks around self-management principles help achieve desired performance outcomes? Applicability questions to consider:

  • What potential benefits exist from applying autonomic computing principles to a future Chainlink metalayer architecture?
  • Can existing Chainlink network functions and characteristics be generalized into an autonomic computing self-management framework?

Core Research Question

  • What are the characteristics of an autonomic computing system? How have these characteristics been implemented in existing computing systems?

Citation

  • Huebscher, M. C. and McCann, J. A. (2008). A survey of autonomic computing—degrees, models, and applications. ACM Computing Surveys, 40(3), Article 7.

Background

  • Although the concepts in this paper have been around for decades now, the overview of autonomic computing it provides may be useful in future blockchain-related systems design
  • “Computing systems have reached a level of complexity where the human effort required to get the systems up and running and keeping them operational is getting out of hand. Autonomic computing seeks to improve computing systems with a similar aim of decreasing human involvement.”
  • This paper gives an overview of IBM’s 2001 autonomic computing self-management principles and presents case studies on how these principles have been implemented in various systems.

Summary

  • IBM’s autonomic computing initiative described autonomic systems as having four main properties:
    • Self-configuration: An autonomic system configures itself according to high-level goals.
    • Self-optimization: An autonomic system optimizes its use of resources, which can include proactive change to adapt to service or environmental challenges.
    • Self-healing: An autonomic system detects and diagnoses problems, and can react to failures or early signs of failure.
    • Self-protection: An autonomic system protects itself from malicious attacks and from end-users. The system autonomously tunes itself to achieve security, data privacy, and data protection.
  • The MAPE-K autonomic loop describes how an autonomic manager within an autonomic system interacts with the element it manages:
    [Figure: the MAPE-K autonomic loop]
  • “In the MAPE-K autonomic loop, the managed element represents any software or hardware resource that is given autonomic behaviour by coupling it with an autonomic manager. Thus, the managed element can for example be a web server or database, a specific software component in an application (e.g. the query optimizer in a database), the operating system, a cluster of machines in a grid environment, a stack of hard drives, a wired or wireless network, a CPU, a printer, etc.”
  • Autonomic manager: monitors the managed element and executes changes on it
    • “The autonomic manager is a software component that ideally can be configured by human administrators using high-level goals and uses the monitored data from sensors and internal knowledge of the system to plan and execute, based on these high-level goals, the low-level actions that are necessary to achieve these goals. The internal knowledge of the system is often an architectural model of the managed element”
  • The MAPE-K loop components can be structured in a hierarchical fashion with many levels of abstraction.
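
As a concrete illustration of the loop, here is a minimal Python sketch of a MAPE-K-style autonomic manager. The class and method names (AutonomicManager, read_sensors, satisfied_by, corrective_action, apply) are invented for this example; they do not come from the paper or from any particular library.

```python
# Minimal sketch of the MAPE-K loop structure described above.
class AutonomicManager:
    def __init__(self, managed_element, goals):
        self.managed_element = managed_element  # e.g. a web server, database, network
        self.goals = goals                      # high-level goals set by administrators
        self.knowledge = {}                     # K: model of the managed element

    def run_once(self):
        data = self.monitor()          # M: read sensors on the managed element
        symptoms = self.analyze(data)  # A: interpret data against goals + knowledge
        plan = self.plan(symptoms)     # P: derive low-level actions
        self.execute(plan)             # E: apply actions through effectors

    def monitor(self):
        return self.managed_element.read_sensors()

    def analyze(self, data):
        # Find the goals the current observations fail to satisfy.
        return [g for g in self.goals if not g.satisfied_by(data, self.knowledge)]

    def plan(self, symptoms):
        return [g.corrective_action(self.knowledge) for g in symptoms]

    def execute(self, plan):
        for action in plan:
            self.managed_element.apply(action)
```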

The paper then describes what it calls levels of autonomicity, and the characteristics of each:

  • Support: We describe this as work that focuses on one particular aspect or component of an architecture to help improve the performance of the complete architecture using autonomicity. For example, a focus on resource management, network support, or administration tools that contain self-management properties.

  • Core: This is where the self-management function is driving the core application itself. That is, if the application’s focus is to deliver multi-media data over a network and the work describes an end-to-end solution including network management and display, audio, etc., then we describe the self-management as core.

  • Autonomous: This is again where a full end-to-end solution is produced, but typically the solution concerns more emergent intelligence and agent-based technologies. The system self-adapts to its environment to overcome challenges that could produce failure, but it is not measuring its own performance and adapting how it carries out its task to better meet performance goals.

  • Autonomic: At this level, work concerns a full architecture and describes the work in terms of that architecture. However, the interest is in higher-level, human-based goals: for example, business goals or SLAs are taken into account. A system at this level reflects on its own performance and adapts itself accordingly.

Applicability

When considering the potential future scale of the proposed Chainlink metalayer, it may be useful to be aware of the design principles of autonomic systems. It may also be useful to generalize both the administration of the metalayer and existing Chainlink network features into a complete autonomic computing framework. Such a framework could be beneficial when considering design parameters to achieve desired network characteristics and resiliency.

(this may fit better in the Mechanism Design section, please move the thread if so)

10 Likes

Thanks so much for posting this @rncd and welcome to SCRF! Could you tell us a little bit more about how this could connect with Chainlink (or other chains)?

1 Like

Autonomic computing principles have already been built into certain Chainlink services. Features like proactive gas management during periods of Ethereum congestion and volatility exhibit self-protection characteristics.
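
As a rough illustration of what that self-protection property might look like in code, here is a hypothetical sketch; the function, its parameters, and the thresholds are invented for this example and are not actual Chainlink implementation.

```python
# Hypothetical sketch: proactive gas management as a self-protection
# property. The service adjusts its own fee strategy before congestion
# causes stuck or failed transactions.
def choose_gas_price(base_fee_gwei, recent_volatility, max_budget_gwei):
    # Under volatile conditions, bid further above the base fee so that
    # time-sensitive oracle updates keep landing on-chain.
    safety_multiplier = 1.25 if recent_volatility > 0.5 else 1.05
    bid = base_fee_gwei * safety_multiplier
    # Self-protection also means bounding the bid so congestion cannot
    # drain the node's budget.
    return min(bid, max_budget_gwei)
```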

However, the area where I see the greatest potential application of these principles to Chainlink is in a future layer within the metalayer - a layer which handles the administrative and clerical tasks of the network as a whole. I will call this layer the overseer.

To see what I mean, let’s start with the end vision and work backwards. Let’s make some assumptions:

  1. The Chainlink network has continued expanding and handles a very high work volume in a number of different service verticals.

  2. Metacontracts have replaced job requests as the network work-initiator and they live entirely within the Chainlink network.

  3. Metacontracts are capable of relaying work-initiations for all types of work on the Chainlink network.

  4. To respond to new work initiations from metacontracts, DONs and work structures are continuously and dynamically composed and dissolved.

Let’s use these assumptions to build the case for the existence of the overseer, and then I will discuss how autonomic computing principles could inform the design of the overseer.

First, work is either transmitted to a Chainlink metacontract by some sort of relay message from another base-layer contract/legacy system/other object, or a metacontract is directly created on the Chainlink network. Thousands/millions of these are created per day. If the metacontract itself does not house complete knowledge about the metalayer, how can the metacontract alone efficiently distribute the work? How does the metacontract assemble a DON that meets the job specifications included in the metacontract?

It seems that drawing functional boundaries around these systems is necessary. The metacontract contains data, data processing requests, details about requirements of who can process the data, output formats, output instructions, and more, but it doesn’t know how to assemble the entity that will actually do the work. The metacontract has no information about the state of the network it exists in. Rather than further complicate the metacontract with bloat that may or may not be relevant to the work specified within, it seems logical to forward things like node requirements to another entity, one whose function is network management. One of this entity’s many network management responsibilities is DON creation. This network management entity is what I am calling the overseer.

Here is a hastily-made MS paint image showing the relationship of the overseer to the metalayer, metacontracts and nodes.

[Figure: overseer structure, showing the relationship of the overseer to the metalayer, metacontracts, and nodes]

A metacontract is created on the left. The metacontract relays node requirements (red letters in the image) to the overseer. The overseer creates a DON according to specification and relays the DON ‘address’ back to the metacontract. The metacontract then uses the created DON to compute the contract. Upon completion of the metacontract (if it has an end), the metacontract relays the completion of the work to the overseer, and the overseer then terminates both the DON and the metacontract (red arrow).
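
To make that flow concrete, here is a minimal Python sketch of the lifecycle. Every name here (Overseer, NodeRequirements, DON, the registry fields) is hypothetical and invented for illustration; none of it reflects actual Chainlink interfaces.

```python
from dataclasses import dataclass
from itertools import count

@dataclass
class NodeRequirements:
    min_reputation: float  # minimum acceptable node reputation score
    max_latency_ms: int    # maximum acceptable response latency
    node_count: int        # how many nodes the DON needs

@dataclass
class DON:
    don_id: int
    nodes: list

class Overseer:
    """Network-management entity: composes and dissolves DONs on request."""

    def __init__(self, node_registry):
        self.node_registry = node_registry  # the overseer's view of network state
        self.active_dons = {}
        self._ids = count(1)

    def create_don(self, req):
        # Select nodes that satisfy the metacontract's requirements, using
        # network knowledge the metacontract itself does not have.
        eligible = [n for n in self.node_registry
                    if n["reputation"] >= req.min_reputation
                    and n["latency_ms"] <= req.max_latency_ms]
        don = DON(don_id=next(self._ids), nodes=eligible[:req.node_count])
        self.active_dons[don.don_id] = don
        return don  # the DON 'address' relayed back to the metacontract

    def complete(self, don_id):
        # On metacontract completion, dissolve the DON (the red arrow above).
        self.active_dons.pop(don_id, None)
```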

This is fundamentally a management process. The overseer could be responsible for tracking network health, node workload, node participation in metacontracts, and external environment information (such as other blockchains’ health and market volatility), and for using this information to make optimized decisions when managing work for metacontracts.

Much of this kind of information is already being tracked, such as node performance, reputation, and response latency. The overseer would have access to this information and would use it to compose work infrastructure that meets metacontract specifications.

Here is a simple replacement of the general MAPE-K autonomic loop with this hypothetical overseer-metalayer relationship:

[Figure: the MAPE-K loop recast as the overseer-metalayer relationship]

In terms of autonomic computing principles, the overseer could be configured to meet certain requirements or high-level goals/targets, such as network average response latency. It could use external data and metalayer status information to make self-configuration and self-protection optimizations to achieve its high-level targets.
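
Here is a toy sketch of one iteration of that loop, with average response latency as the high-level goal. The target value, the node fields, and the reassign_work/knowledge members of the overseer are all assumptions made up for this example.

```python
TARGET_AVG_LATENCY_MS = 500  # hypothetical high-level goal

def mape_k_iteration(overseer, metalayer):
    # Monitor: collect sensor data from the managed element (the metalayer).
    latencies = [node["latency_ms"] for node in metalayer.nodes]
    if not latencies:
        return

    # Analyze: compare observations against the high-level goal.
    avg_latency = sum(latencies) / len(latencies)
    if avg_latency <= TARGET_AVG_LATENCY_MS:
        return  # goal satisfied; nothing to plan

    # Plan: pick a low-level action, e.g. shift work away from slow nodes.
    slow_nodes = [n for n in metalayer.nodes
                  if n["latency_ms"] > TARGET_AVG_LATENCY_MS]

    # Execute: apply the planned action through effectors.
    for node in slow_nodes:
        overseer.reassign_work(node)

    # Knowledge: record the outcome to inform future planning.
    overseer.knowledge.append({"avg_latency": avg_latency,
                               "rebalanced": len(slow_nodes)})
```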

5 Likes

I am starting to see metalayers, and metacontracts by proxy, as the inevitable solution to the “eventual and yet unknown problem” sure to emerge in each protocol. It seems to be a simple yet elegant way of establishing flexibility while also creating zones in which data can attain the property of immutability by consensus.

What, in your opinion, is the first real-world use case for this framework that would demonstrate its utility?

4 Likes

I almost forgot:

Thank you for this wonderful post! It is going to be very useful to reference in the future.

4 Likes

Many real-world software systems have adopted these principles. NASA as an organization is notable for designing software this way - many of their spacecraft systems have to be self-managed during communications blackouts and highly adaptable and responsive to their environment. I think a lot of these concepts have been rolled into the field of machine learning, but it’s nice to look at the foundational ideas to see if they can be applied in a different direction, such as network management. I have a hunch that some complex cybersecurity monitoring systems are built with these characteristics, but I don’t know much about that field.

4 Likes

The reason I asked if there would be any real-world applications is because a LOT of current “automation” is just recursive boolean statements. I don’t know if that qualifies as “autonomous” in the way these frameworks propose. Looking at “autonomous vehicles” or “self-constructing machines based on simulations”, the level of “autonomy” is 100% dependent upon the programmer’s capacity to create algorithms that do not get stuck in loops or have logical inconsistencies. Have you seen any actual code that you would consider to be “fully autonomous”? I have talked to the people who built the robot that passed the Japanese entrance exam, and also people who have created algorithms that simulate the best machine for an environment before sending the schematics to a 3D printer to be built, and even THAT level of “automation” isn’t really automated YET.

Have you seen automation that you believe TRULY qualifies as “autonomous” in person?

I’ve seen Sophia in person, and it’s 100% not autonomous.

This may just be me having worked in ML and having my standards be ridiculously and unnecessarily high, but I also haven’t seen everything that exists :slight_smile:

4 Likes

Great points - I see what you’re saying. I should first clarify that my software knowledge and ability to translate ideas into actual code (and back) is somewhat limited (a few semesters of undergraduate algorithms and data structures). My knowledge and experience are mostly in governance and system/process risk. I wouldn’t be able to look at complex code and evaluate its properties very easily.

I agree with you that the word “autonomous” has been adopted to mostly describe software that reacts to its environment and produces an output according to an internal logic. This is simply a reactive behavior (a certain detected input determines an output) and would fall on the low end of the scale of autonomy, though I would still classify it as autonomous behavior.

The way I see it is: as you move up the scale of autonomy, the managed objects within the system rise in complexity. Low-level autonomous systems may simply take an input and apply a simple (or very complex) algorithm to it to produce a range of outputs. This autonomy level only manages the input/output logic with an algorithm - the crudest autonomy.

A slightly more autonomous system may manage algorithm selection, or other “mid-level manager” software functions. It may take environmental input and then decide which algorithm to apply to the inputs. This is the first autonomy level that I would call self-managing - choosing an optimal process based on internal or external factors.
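
Here is a toy sketch of that “mid-level” autonomy, where the system manages algorithm selection rather than just input/output logic; the function names and the CPU-load threshold are made up for the example.

```python
def compress_fast(data): ...       # placeholder algorithm A
def compress_thorough(data): ...   # placeholder algorithm B

def self_managing_compress(data, cpu_load):
    # Environmental input (current CPU load) decides which algorithm
    # to apply - the system manages algorithm selection itself.
    if cpu_load > 0.8:
        return compress_fast(data)      # under pressure, favor speed
    return compress_thorough(data)      # otherwise, favor quality
```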

An advanced autonomous system (what you are calling truly autonomous, I think) would be one that dynamically manages its large-scale anatomy. I am not aware of any software like this. This type of software might take environmental or internal data and choose a different governance protocol, or reorganize itself based on performance, or write or delete its own code to improve program function. This is the cool stuff, and you are right to have very high standards.

Even very advanced AI/ML software like AlphaGo seems to be low-level autonomous software that is iteratively improving the algorithm it is applying to inputs - which is a relatively low-level object to manage, even if the algorithm is staggering in complexity. This is probably what gives it the colloquial appearance that it is highly autonomous.

For blockchain and blockchain-adjacent software, I think mid-level autonomy is probably the most applicable. For this century, anyway…

I think sentient software is probably indistinguishable from truly autonomous software.

5 Likes

This is also where I believe computer scientists and by proxy journalists covering them tend to overstep and conflate properties that shouldn’t be conflated.

As someone who has multiple degrees, one of them being in psychology and having worked in an fMRI lab, I have unique insight into some of the things that people in computer science are asserting and can counter them with factual evidence.

Currently, “consciousness” or the notion of a “geolocated source of sentience” is still being debated. There is agreement that “consciousness” exists, but the psychological and brain sciences do not agree on where that consciousness originates. There are many theories, and the dominant ones have consciousness existing locally in the brain, but the advances in robotics have come from using a “global consciousness” model in which not all information has to be relayed back to a central location (the brain) to be processed.


I will give an example to ground it:

muscle memory.

On the surface, muscle memory appears to be nothing more than habituated quick-twitch responses that have been practiced enough to have the fibers respond in synchronicity without having to process everything on the conscious level.

At first, this appears to be innocuous and just looks like an extremely fast relay.

The issue is, there is a possibility that the neurons are processing information on site and never actually sending a transmission back to the brain in situations in which an instant response is necessary, e.g. a sudden threat.

When news sources put out the second-hand reporting, they make it seem like “muscle memory exists at the DNA level” based on the results, but that is only one possible explanation of those results:

Study proves ‘muscle memory’ exists at a DNA level - Keele University

Looking at the study directly, rather than the second-hand sources, the researchers do not go so far in their claims and instead ground their theory in what is known as the “myonuclear domain”.

It will be a while before we are in a position where we can scientifically agree on what a “sentient” machine looks like, which is a cool place to be, but I just wanted to put that out there.

To bring it full circle:

A soda machine has “autonomous” actions at a certain level, but that doesn’t make it “sentient”.


(Fully autonomous soda maker for example)

Sentience does not equate to “free will”.

There are a lot of constructs that computer scientists (and by proxy journalists covering them) have accidentally conflated that deserve to be parsed out.

That paper I linked articulates the difference between “sentience” and “intentional systems” to parse out what has been conflated as “sentience” and “autonomous self-modifying machines” in that an “autonomous self-modifying machine” may APPEAR sentient, but in fact is most likely just an “intentional system” in which no sentience exists.

I’ve seen Sophia and Wilson up close…and neither is REMOTELY close to being “Sentient” as we all likely think of it based on our lived experiences of it, but both are VERY close to being fully autonomous.

P.S.

Here is a link to a work I co-authored that got published in a neuroimaging journal. You can’t just say “I used to look at brains with a super-powered magnet and imager” on the Internet without providing proof :grin:
The timecourse of activation within the cortical network associated with visual imagery - PubMed (nih.gov)

7 Likes

“The issue is, there is a possibility that the neurons are processing information on site and never actually sending a transmission back to the brain in situations in which an instant response is necessary, e.g. a sudden threat.”

Slightly off-topic, but I wonder: Can this statement be the direct counterargument to the famous Libet experiment?

The Libet experiment
tl;dw: The hand moves before consciousness. Does that mean your hands move independently, without the interference of free will?

I’m just so fascinated that, as the frontier of our knowledge expands (muscle memory), we tend to have totally different interpretations of the same experiment.

4 Likes

Again, the Libet experiment assumes “localized consciousness” which may not be accurate.

I will frame it this way:

All day you think about your existence from the perspective of your eyes and face. If you are sitting at a desk for hours on end, eventually your body may start to hurt. Does that mean your consciousness extended to your body, or that your consciousness received signals from your body, or that the consciousness is present in the ENTIRE body?

All three have different implications, where the Libet experiment only used two of those premises as potential options, when clearly those are not the only explanations.

Reflexes by definition do not need “consciousness” to be executed, as reflexes are the autonomous responses of the body.

This is why it’s not that easy to parse “where” consciousness exists.

In fact, consciousness can’t OVERCOME autonomic responses.

If a doctor hits your knee with a hammer, no amount of consciousness can override the autonomic response.

You cannot “will” your heart to stop beating.

Consciousness has limits, and there is a line where autonomous reactions don’t depend on consciousness and can even contradict what the conscious mind intends.

Stage fright, for example, is a scenario in which “consciousness”, or in some cases “the subconscious”, overrides muscle memory so as to make it seemingly inaccessible.

I believe we are ALSO dancing around another subject, which is the threshold of perception or what would appear to be the latency between “real-time” and “conscious perception”.

There is a wide range of perception thresholds that have been calculated using low-light scenarios to test the time it takes for the pupil to dilate with the introduction of a photon.

The issue becomes: the detection threshold is not static within or between individuals. What can appear to be “instantaneous” for one person can actually be a “delayed reaction” for another. Considering the Libet experiment was first introduced before high-speed cameras were accessible, I have a hard time accepting the results as showing what they are claimed to show. This is also why the psychology field has generally discarded the Libet experiment due to flawed methodology.

Libet’s experiment: Questioning the validity of measuring the urge to move - ScienceDirect

The abstract:

“The time of subjectively registered urge to move (W) constituted the central point of most Libet-style experiments. It is therefore crucial to verify the W validity. Our experiment was based on the assumption that the W time is inferred, rather than introspectively perceived. We used the rotating spot method to gather the W reports together with the reports of the subjective timing of actual movement (M). The subjects were assigned the tasks in two different orders. When measured as first in the respective session, no significant difference between W and M values was found, which suggests that uninformed subjects tend to confuse W for M reports. Moreover, we found that W values measured after the M task were significantly earlier than W values measured before M. This phenomenon suggests that the apparent difference between W and M values is in fact caused by the subjects’ previous experience with M measurements.”

In short, the Libet experiment’s methodology was TERRIBLY flawed.

4 Likes

There is some really eclectic knowledge exchange happening here!

I’m about it, but also want to get a little back to the discussion @rncd and @Larry_Bates were having. I like the points here about the importance of autonomic computing and how that might interact with and make chains more efficient, etc., but I’m curious if this doesn’t end up with some scaling barriers.

Wouldn’t all of these interactions laid out in the hastily-made but incredibly helpful MS paint image @rncd created (reposted below) create transaction costs as well? I might be off base, but does this create an autonomic computing complexity barrier? Certainly not theoretically, but practically?

You know what, the consciousness discussion with @Jerry_Ho informs this question too. We’re not necessarily talking about chain consciousness, but complexity for sure. There’s a need to have some of this just “happen”, but it takes a lot of transactions in order for that to happen.

[Figure: overseer structure, reposted from above]

3 Likes

This framing was all to bring the discussion full circle to elucidate the logic behind not having machine learning transactions taking place on a network in which “learning” will have a fee associated with each new “piece of information learned”.

You’re absolutely right that there will be a cost incurred for processing at some point in this, and in that framing it becomes clear that the only way to scale the most complex parts of this system would be to ensure that they would be the least costly to execute. Some might argue that the testnets serve this purpose, but that doesn’t equate to real-world data and is still technically a sandbox environment.

An overseer within a metalayer may not incur “gas” costs or “transaction fees” directly, depending on the layer at which it interacts with a given blockchain, but it will incur electricity costs at the least.

The entire reason I wanted to go down that road of complexity for the consciousness discussion is to bring it back to the notion that complex computing will be extra-expensive if every transaction incurs a direct fee, and thus the most complex calculations will need to have a directly proportional value return in a paradigm in which extractable value dictates the focus of computing power.

MEV and ML for the sake of ML cannot theoretically coexist in the wild.

3 Likes

This is a really interesting post, and it brings to mind a topic I’ve been researching recently.
@Sean1992076 and I recently published a paper in IEEE ICBC 2021 titled “Testbed Design and Performance Analysis for Multi-Layer Blockchains” - the summary of which I will likely be posting on this forum soon, but I’ll give a quick intro to it here. Let’s say some company comes to us and asks us to design a multi-layer blockchain system for them. Seeing as there are always certain design tradeoffs to consider, that company will then need to specify which features they want to optimize for: scalability, security, throughput, etc. In our paper, we define a “parameter space” of a blockchain as the union of the set of hardware and software parameters on which the blockchain runs: block size, block time, network topology, CPU model, number of cores, etc. After collecting data and benchmarking, we manually select which chain has the best performance and suits that company’s application scenario the best.

Now interestingly enough, one of our original design ideas was to have parameter auto-tuning settings, but we never finished adding this functionality. With that being said, research in autonomic computing could help us not only to allow for auto-tuning functionality, but also to go far above and beyond that by allowing the system to be self-aware. Theoretically, based on the eight conditions defined by IBM, we could construct a system that allows for self-configuration and self-optimization of the multi-layer blockchain (a sketch of what such auto-tuning might look like follows the list below).

  1. Define resources, capabilities, and limitations
    Resources: define the set of hardware
    Capabilities: the impact and intended purpose of the software
    Limitations: understanding design tradeoffs
  2. Configuration and reconfiguration in a dynamic environment
    The dynamic environment in this case may involve situations of high network congestion or malfunctioning hardware. Note the additional constraint of the user’s application scenario.
  3. Optimize and update the system autonomously for best performance
    This is precisely what our paper does, only we perform it manually.
  4 and 5. The system must be able to repair itself, solve encountered problems, and prevent/protect against attacks
    Our paper defines different application scenarios in which crash-fault tolerance would be more suitable than Byzantine fault tolerance, and vice versa.
  6. Adapt to dynamic environments and establish communication protocols
    We use permissioned blockchains.
  7. Open standards; cannot be a proprietary environment
    The experiments we perform are specifically on permissioned blockchains, and so this requirement would only be partially fulfilled.
  8. Anticipate system resource demands, maintain transparency to users
    This is a requirement that smart contracts fulfill perfectly.
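
As promised above, here is a hypothetical sketch of what parameter auto-tuning over such a parameter space could look like. The parameter values, the scoring function, and the exhaustive sweep are all assumptions for illustration; they are not the paper’s actual testbed code.

```python
import itertools

# A small slice of the "parameter space": hardware and software parameters
# on which the blockchain runs (values invented for the example).
PARAMETER_SPACE = {
    "block_size_kb": [256, 512, 1024],
    "block_time_s":  [1, 5, 15],
    "num_cores":     [4, 8, 16],
}

def benchmark(config):
    # Stand-in scoring function; in practice this would be real throughput
    # and latency measurements from a deployed testbed.
    return (config["num_cores"] * config["block_size_kb"]
            / (config["block_time_s"] * 100))

def auto_tune(objective=benchmark):
    # Exhaustive sweep of the parameter space. A genuinely self-optimizing
    # system would search adaptively and re-tune as conditions change.
    best_config, best_score = None, float("-inf")
    keys = list(PARAMETER_SPACE)
    for values in itertools.product(*(PARAMETER_SPACE[k] for k in keys)):
        config = dict(zip(keys, values))
        score = objective(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config
```
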
8 Likes