Research Summary: Arbitrum: Fast, Scalable, Private Smart Contracts

TLDR

This paper (and the corresponding whitepaper) describes an optimistic rollup, an L2 scaling solution for smart contracts that recently launched on the Ethereum blockchain. The protocol uses mechanism design to incentivize off-chain agreement on VM execution. Malicious behavior is penalized by loss of a deposit after a challenge is resolved in a multi-round game. Any honest party can raise a dispute to correct potentially fraudulent activity efficiently and advance the VM state on-chain.

Core Research Question

How do we scale blockchains with smart contracts using an off-chain solution?

Citation

[PAPER] Arbitrum: Scalable, private smart contracts. Harry Kalodner, Steven Goldfeder, Xiaoqi Chen, S. Matthew Weinberg, and Edward W. Felten, Princeton University

[WHITEPAPER] Arbitrum: Fast, Scalable, Private Smart Contracts. OFFCHAIN LABS

Links

https://www.usenix.org/conference/usenixsecurity18/presentation/kalodner

Arbitrum’s Team Ask Me Anything on Reddit

Background

Layer-2: collective term for solutions designed to help scale the transactional throughput of applications by handling transactions off the underlying blockchain (layer 1).

consensus: protocol for users (nodes) to come to an agreement on the shared history of transactions

Verifier: An abstraction for the underlying consensus mechanism. Arbitrum is consensus-agnostic and uses the term to express that it can work with permissionless or permissioned chains.

verifiers: The term generically refers to the underlying consensus mechanism participants. For example, in the Bitcoin protocol, Bitcoin miners are the verifiers.

validators/managers: A manager [PAPER] of a VM is a party that monitors the progress of a particular VM and ensures the VM’s correct behavior. When a VM is created, the transaction that creates the VM specifies a set of managers for the VM. A manager is identified by its public key. [WHITEPAPER] uses the term validators.

VM: Taking the blockchain as an operating system, Arbitrum uses VM to refer to smart contracts/Dapps.

The Arbitrum compiler [WHITEPAPER] takes a group of contracts written in Solidity, compiles and links them together into a single executable file that can run on the Arbitrum Virtual Machine (AVM) architecture.

EthBridge: a dapp running on Ethereum. Its job is to serve as a bridge between Ethereum and Arbitrum. Anyone in Ethereum-land can call the EthBridge to interact with Arbitrum-land, for example to launch an Arbitrum VM, to make a call to a contract running on Arbitrum, or to send ether or any token to an Arbitrum VM. The EthBridge’s other important job is to referee disputes between validators.

AnyTrust guarantee: Arbitrum guarantees correct execution as long as any one validator of a Dapp acts honestly. Most other approaches require majority-honest or two-thirds-honest assumptions, or else require moving the entire state of a contract to the main chain in case of a dispute.

bisection protocol: the dispute resolution protocol of Arbitrum between two validators, a challenger and an asserter, with the L1 as referee.

Summary

  • Why scaling is hard

    • Verifier’s dilemma: The problem is the high cost of verifying VM execution. Either verifiers accept transactions without verifying them, expecting that everyone else will do the same or that others will do the hard work in their stead, or they honestly verify incoming transactions, which exposes them to time-consuming computation that slows them down in the block-production race.
    • Participation game: a mechanism-design approach that aims to induce a limited but sufficient number of parties to verify each VM’s execution. These systems face the participation dilemma: how to prevent Sybil attacks, in which a single verifier claims to be multiple verifiers and, in doing so, can drive other verifiers out of the system. Works prior to Arbitrum fail to resolve this problem because smart-contract verification is a repeated game, and repeated games admit many equilibria that do not project onto Nash equilibria of their one-shot variants. For example, the case where only one player participates is a Nash equilibrium.
    • Arbitrum: An L2 protocol in the Optimistic Rollup family to improve scaling on an L1 blockchain like Ethereum.
    • In Arbitrum, parties can implement a smart contract by instantiating a Virtual Machine (VM) that encodes the rules of a contract. The creator of a VM designates a set of validators for the VM.
    • The Arbitrum protocol provides an AnyTrust guarantee: Any honest validator can force the VM to behave according to the VM’s code. Parties that are interested in the VM’s outcome can serve as validators themselves or appoint someone they trust to manage the VM on their behalf. Parties can send messages and currency to a VM, and a VM can send messages and currency to other VMs or other parties. VMs may take actions based on the messages they receive. The Verifier also tracks the hash of the VM’s inbox. Arbitrum also allows contracts to execute privately, publishing only hashes of contract states.
    • Relying on validators, rather than requiring every verifier (here, miners) to emulate every VM’s execution, allows a VM’s validators to advance the VM’s state at a much lower cost to the verifiers. Verifiers track only the hash of the VM’s state, rather than the full state. Arbitrum creates incentives for the validators to agree on what the VM will do off-chain. Any state change that is endorsed by all of the validators will be accepted by the verifiers.
    • If, contrary to incentives, two validators disagree about what the VM will do, the verifiers employ a bisection protocol to narrow the disagreement down to the execution of a single instruction, and then one manager submits a simple proof of that one-instruction execution which the verifiers can check very efficiently. The liar must then pay a substantial financial penalty to the verifiers, which serves to deter disagreements.
  • The Arbitrum VM has been designed to make checking one-step proofs fast and simple. In particular, the VM design guarantees that the space to represent a one-step proof and the time to generate and verify such a proof are bounded by small constants, independent of the size and contents of the program’s code and data.

  • Assumptions: It is assumed that users will only pay attention to a VM if they agree that the VM was initialized correctly and have some stake in its correct execution. By Arbitrum’s AnyTrust assumption, parties should only rely on the correct behavior of a VM if they trust at least one of the VM’s managers. One way to have a validator you trust is to serve as a manager yourself.

  • One key assumption that Arbitrum makes is that a validator will be able to send a challenge or response to the Verifier within the specified time window. In a blockchain setting, this means the ability to get a transaction included in the blockchain within that time. While critical, this assumption is standard in cryptocurrencies, and risk can be mitigated by extending the challenge interval (which is a configurable parameter of each VM).

  • Two factors help alleviate denial-of-service (DoS) attacks against honest validators. First, if a DoS attacker cannot be certain of preventing an honest validator from submitting a challenge, the risk of incurring a penalty may still be enough to deter a false assertion. Second, because each manager is identified only by a public key, a validator can use replication to improve its availability, including the use of “undercover” replicas whose existence or location is not known to the attacker in advance. Separately, a motivated malicious validator could stall a VM by continuously challenging all assertions about its behavior, but the attacker will lose at least half of every deposit, and each such loss delays the progress of the VM only for the time required to run the bisection protocol. We assume that the creators of a VM will set the deposit amount large enough to deter this attack.
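To put rough numbers on that deterrence argument, here is a minimal back-of-envelope sketch; the deposit size, number of bisection rounds, and response window below are hypothetical parameters, not values from the paper:

```python
# Back-of-envelope cost for an attacker who stalls a VM by challenging every
# assertion. All numbers are illustrative assumptions, not protocol constants.

deposit = 100.0           # currency escrowed per challenge (a VM-specific parameter)
loss_fraction = 0.5       # the liar loses at least half of every deposit
bisection_rounds = 20     # ~log2(steps) moves needed to isolate one instruction
round_interval_hours = 1  # assumed response window per move

delay_per_challenge_h = bisection_rounds * round_interval_hours
cost_per_challenge = deposit * loss_fraction

print(f"Each bogus challenge delays the VM ~{delay_per_challenge_h} hours "
      f"and costs the attacker {cost_per_challenge} of deposit")
print(f"Cost of stalling for a week: "
      f"{cost_per_challenge * (7 * 24 / delay_per_challenge_h):.0f}")
```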

Method

  • VM lifecycle:

  • The Arbitrum protocol recognizes two kinds of actors: (public) keys and VMs. A key is deemed to have taken an action if that action is signed by the corresponding private key.
  • An Arbitrum VM is created using a special transaction, which specifies the initial state hash of the VM, a list of validators for the VM, and parameters such as the length of the challenge period. The state hash represents a cryptographic commitment to the VM’s state.
  • Once a VM has been created, validators can take action to cause that VM’s state to change. The Arbitrum protocol provides an AnyTrust guarantee: any one honest validator can force a VM’s state change to be consistent with the VM’s code and state, that is, to be a valid execution according to the AVM Specification.
  • An assertion states that if certain preconditions hold, the VM’s state will change in a certain way. An assertion about a VM is said to be eligible if the assertion’s preconditions hold, the VM is not in a halted state, and the assertion does not spend more funds than the VM owns. The assertion contains the hash of the VM’s new state and a set of actions taken by the VM, such as sending messages or currency. Note that for each VM, the Verifier tracks the hashed state of that VM, along with the amount of currency held by the VM, and a hash of its inbox.
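As an illustration of this division of labor, here is a minimal sketch of what the Verifier tracks per VM versus what an assertion claims; the field names and layout are illustrative, not the actual EthBridge storage format:

```python
from dataclasses import dataclass

@dataclass
class TrackedVM:
    """What the Verifier stores per VM: hashes and a balance, never the full state."""
    state_hash: bytes      # cryptographic commitment to the VM's full state
    inbox_hash: bytes      # hash of the VM's message inbox
    balance: int           # currency held by the VM
    challenge_period: int  # blocks a disputable assertion stays pending

@dataclass
class Assertion:
    """A claim: given these preconditions, the VM reaches this state and takes these actions."""
    precondition_state_hash: bytes
    precondition_inbox_hash: bytes
    num_steps: int
    new_state_hash: bytes
    actions: list          # messages / currency sends emitted by the execution
```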

Unanimous assertions are signed by all validators of that VM. If a unanimous assertion is eligible, it is immediately accepted by the Verifier as the new state of the VM.

Disputable assertions are signed by only a single validator, and that validator attaches a currency deposit to the assertion. If a disputable assertion is eligible, the assertion is published by the Verifier as pending. If a timeout period passes without any other validator challenging the pending assertion, the assertion is accepted by the Verifier and the asserter gets their deposit back. If another validator challenges the pending assertion, the challenger puts down a deposit, and the two validators engage in the bisection protocol, which determines which of them is lying. The liar will lose their deposit.
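A minimal sketch of that lifecycle, with simplified rules (accept after the challenge period if unchallenged, otherwise enter the bisection game); the function and field names are hypothetical:

```python
def resolve_pending(assertion, current_block, challenge=None):
    """Decide what happens to a pending disputable assertion (simplified)."""
    if challenge is None:
        if current_block >= assertion["posted_at"] + assertion["challenge_period"]:
            return "accepted: asserter's deposit refunded, state hash advances"
        return "still pending: waiting out the challenge period"
    # A challenger escrowed a deposit: run the bisection game to find the liar.
    return "disputed: asserter and challenger enter the bisection protocol"

pending = {"posted_at": 1_000, "challenge_period": 50}
print(resolve_pending(pending, current_block=1_020))                       # still pending
print(resolve_pending(pending, current_block=1_060))                       # accepted
print(resolve_pending(pending, current_block=1_020, challenge="0xabc"))    # disputed
```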

  • Bisection protocol: If a validator challenges an assertion, the challenger must put a deposit in escrow. The asserter and the challenger engage in a game, via a public protocol, to determine who is correct. The party who wins the game recovers their deposit and takes half of the losing party’s deposit. The other half of the loser’s deposit goes to the Verifier, as compensation for the work required to referee the game.

The game is played in alternating steps. After a challenge is registered, the asserter is given a pre-specified time interval to bisect its previous assertion. If the previous assertion involves N steps of execution in the VM, then the two new assertions must involve ⌊N/2⌋ and ⌈N/2⌉ steps, respectively, and the two assertions must combine to be equivalent to the previous assertion. If no valid bisection is offered within the time limit, the challenger wins the game. After a bisection is offered, the challenger must challenge one of the two new assertions, within a pre-specified time interval. The two players alternate moves. At each step, a player must move within a specified time interval, or lose the game.

After a logarithmic number of bisections, the challenger will challenge an assertion that covers a single step of execution. At this point the asserter must offer a one-step proof, which establishes that in the asserted initial state, and assuming the preconditions, executing a single instruction in the VM will reach the asserted final state and take the asserted publicly visible actions, if any. This one-step proof is verified by the Verifier. See the figure below.

[Figure: description of the bisection protocol, from [PAPER]]
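To make the narrowing-down concrete, here is a minimal sketch of the bisection logic from the challenger’s point of view; `asserter_claim` and `honest_state_at` stand in for the asserter’s published hashes and the challenger’s own local re-execution, so this is an illustration rather than the actual protocol messages:

```python
import math

def bisect_dispute(start, num_steps, asserter_claim, honest_state_at):
    """Narrow a dispute over `num_steps` of execution down to one instruction.

    asserter_claim(i, j): the asserter's claimed state after executing steps i..j.
    honest_state_at(k):   what the honest challenger computes locally after step k.
    Returns the index of the single step whose one-step proof must be checked.
    """
    lo, hi = start, start + num_steps        # agree at lo, disagree at hi
    while hi - lo > 1:
        mid = lo + (hi - lo) // 2            # split into the floor/ceil halves
        # The challenger checks the first half-assertion against its own
        # execution and challenges whichever half it disagrees with.
        if asserter_claim(lo, mid) != honest_state_at(mid):
            hi = mid
        else:
            lo = mid
    return lo

# Toy example: the asserter's claims diverge from the truth starting at step 43.
truth = list(range(101))                     # "state" after each of 100 steps

def claim(i, j):                             # asserter's (dishonest) claim for steps i..j
    return truth[j] if j < 43 else truth[j] + 1

def honest(k):                               # challenger's local re-execution
    return truth[k]

print(bisect_dispute(0, 100, claim, honest))   # -> 42: the single step (42 -> 43) to prove
print(math.ceil(math.log2(1_000_000)))         # ~20 rounds for a million-step assertion
```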

  • The VM uses a stack-based architecture. Its state is organized hierarchically. This allows a hash of a VM’s state to be computed in Merkle Tree fashion, and to be updated incrementally. The VM architecture ensures that instructions can only modify items near the root of the state tree and that each node of the state tree has a maximum degree of eight.
  • The state of a VM contains the following elements:
    • an instruction stack, which encodes the current program counter and instructions;
    • a data stack of values;
    • a call stack, used to store the return information for procedure calls;
    • a static constant, which is immutable; and
    • a single mutable register which holds one value.
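To make the hierarchy concrete, here is a minimal sketch of a state commitment computed Merkle-style from those components; the actual AVM hash format differs, so the encoding below is purely illustrative:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Toy hash combiner standing in for the AVM's hashing of tree nodes."""
    return hashlib.sha256(b"".join(parts)).digest()

def hash_stack(items):
    """Hash a stack as a linked list, so pushes and pops only touch the top."""
    acc = h(b"empty")
    for item in items:                       # bottom to top
        acc = h(acc, item)
    return acc

def state_hash(instruction_stack, data_stack, call_stack, static_val, register):
    """Combine the five state components into a single root commitment."""
    return h(hash_stack(instruction_stack),
             hash_stack(data_stack),
             hash_stack(call_stack),
             h(static_val),
             h(register))

root = state_hash([b"PUSH 1", b"PUSH 2", b"ADD"], [b"7"], [], b"constants", b"0")
print(root.hex())
```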

  • Instruction Stack: Arbitrum maintains an “instruction stack” which holds the instructions in the remainder of the program. To advance, the Arbitrum VM pops the instruction stack to get the next instruction to execute, halting if that stack is empty. Jump and procedure call instructions change the instruction stack, with procedure call storing the old instruction stack (pushing a copy of the instruction stack onto the call stack) so that it can be restored on procedure return. This approach allows a one-step proof to use constant space and allows verification of the current instruction and the next instruction stack value in constant time.
  • A VM interacts with other parties by sending and receiving messages. A message consists of a value, an amount of currency, and the identity of the sender and receiver.
  • Preconditions, Assertions, and One-Step Proofs: Each assertion is accompanied by a set of preconditions consisting of a hash of the VM’s state before the asserted execution, a hash of the VM’s inbox contents, an optional lower bound on the VM’s currency balance, and optional lower and upper bounds on the time (measured in block height). An assertion will be ignored as ineligible unless all of its preconditions hold. Still, parties may choose to store an ineligible assertion in the hope that it becomes eligible later.
  • In addition to preconditions, an assertion contains the following components: the hash of the machine state after the execution, the number of instructions executed, and the sequence of messages emitted by the VM.
  • A one-step proof is a proof of correctness, assuming a set of preconditions, for an assertion covering the execution of a single instruction. It must provide enough information, beyond the preconditions, to enable the Verifier to emulate the single instruction that will be executed (a simplified sketch follows this list).
  • Because the state of the VM is organized as a Merkle Tree, whose root hash is given as a precondition, the proof only needs to expand out enough of the initial state Merkle tree to enable the Verifier to emulate the execution of the single instruction. It verifies that the result matches the claimed assertion.
  • Messages are sent to a VM by users (with their keys) by putting a special message delivery transaction on the blockchain; and by other VMs using the send instruction. A message logically has four fields: data (an AVM value), a non-negative amount of currency and the identities of the sender and receiver of the message.
  • A VM’s validators track the state of its inbox, but the Verifier need only track the inbox’s hash, because that is all that will be needed to verify a one-step proof of the VM receiving the inbox contents.
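Putting the pieces together, below is a minimal sketch of how a referee could check a one-step proof for a toy ADD instruction: the proof reveals only the top of the data stack plus the hash of the untouched remainder, and the Verifier recomputes the commitments before and after emulating the instruction. The hashing and encoding here are illustrative, not the real AVM format:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def stack_hash(top: bytes, rest_hash: bytes) -> bytes:
    """Commit to a stack by hashing its top item with the hash of the rest."""
    return h(top, rest_hash)

def verify_one_step_add(proof, claimed_pre_hash, claimed_post_hash):
    """Check a one-step proof for a toy ADD instruction.

    The proof reveals only the top two data-stack items and the hash of the
    untouched remainder; everything else stays opaque to the Verifier.
    """
    a, b, rest = proof["a"], proof["b"], proof["rest_hash"]
    # 1. The revealed pieces must hash up to the committed pre-state.
    if stack_hash(a, stack_hash(b, rest)) != claimed_pre_hash:
        return False
    # 2. Emulate the single instruction and re-commit the result.
    total = (int.from_bytes(a, "big") + int.from_bytes(b, "big")).to_bytes(8, "big")
    return stack_hash(total, rest) == claimed_post_hash

rest = h(b"rest-of-stack")
a, b = (3).to_bytes(8, "big"), (4).to_bytes(8, "big")
pre = stack_hash(a, stack_hash(b, rest))
post = stack_hash((7).to_bytes(8, "big"), rest)
print(verify_one_step_add({"a": a, "b": b, "rest_hash": rest}, pre, post))  # True
```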

Results

Scalability. This is the key feature of Arbitrum. Validators can execute a VM indefinitely, paying only negligible transaction fees that are independent of the complexity of the code they are running. If participants follow their incentives, all assertions should be unanimous and disputes should never occur; but even if a dispute does occur, the Verifier can resolve it efficiently at little cost to honest parties (and substantial cost to the dishonest party).

AnyTrust Guarantee. Arbitrum guarantees correct execution as long as any one validator of a dapp acts honestly—even if all of the other validators collude to (try to) cheat.

Privacy. Arbitrum’s model is well-suited for private smart contracts. Absent a dispute, no internal state of a VM is revealed to the Verifier. Further, disputes should not occur if all parties execute the protocol according to their incentives. Even in the case of a dispute, the Verifier is only given information about a single step of the machine’s execution but the vast majority of the machine’s state remains opaque to the Verifier.

Interoperability. Arbitrum is interoperable with Ethereum. A Dapp written in Solidity can be compiled using the open source Arbitrum compiler to generate Arbitrum-ready code. Users can also transfer Ether or any other Ethereum-based token back and forth between Ethereum and Arbitrum.

Flexibility. Unanimous assertions provide a great deal of flexibility, as validators can choose to reset a machine to any state they wish and take any actions they want, even ones that are invalid under the machine’s code. This requires unanimous agreement by the validators, so if any one validator is honest, it will only be done when the result is one an honest validator would accept, such as winding down a VM that has gotten into a bad state due to a software bug.

Discussion and Key Takeaways

Arbitrum is an L2 protocol that aims to fix many issues on L1, specifically on Ethereum. As it stands today, Ethereum cannot scale because requiring miners to emulate every smart contract is expensive, and this work must be duplicated by every miner. Ethereum has no built-in privacy, which then has to be added as an overlay. Solutions based on zero knowledge could help, but they are expensive to run, so throughput would be limited to a few transactions per block. Finally, Ethereum-style smart contracts are inflexible, since deviation from the code is not possible. In Arbitrum this is possible, as long as all of the VM’s validators agree to it.

By making disputes relatively cheap to resolve and imposing a substantial penalty on the loser, Arbitrum strongly disincentivizes attempts to cheat. Even when a dispute does occur, it does not impose a large on-chain burden. In the common case, validators will agree and progress will occur off-chain, with only occasional touches to the main chain.

To demonstrate Arbitrum’s efficiency, the authors measured the throughput of an Arbitrum VM that performs iterative SHA-256 hashing. They evaluated its performance on an early-2013 Apple MacBook Pro with a 2.7 GHz Intel Core i7 and attained 970,000 hashes per second. Comparatively, native code on the same machine achieved 1,700,000 hashes per second, while Ethereum is only capable of processing approximately 1,600 hashes per second due to its gas limit.

Arbitrum’s performance advantage extends further. The Verifier is capable of handling large numbers of VMs simultaneously. Instantiating many copies of the Iterated Hashing VM, the authors measured that the Verifier node running on the test machine was capable of processing over 5,000 disputable assertions per second. This brings the total possible network throughput up to over 4 billion hashes per second, compared to 1,600 for Ethereum.
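As a back-of-envelope consistency check on those figures (an illustration based on the reported numbers, not a measurement from the paper):

```python
# Reported figures: ~970,000 hashes/s per VM off-chain, and a single Verifier
# node handling ~5,000 disputable assertions per second.
hashes_per_second_per_vm = 970_000
assertions_per_second = 5_000

# If each assertion covers roughly one second of a VM's execution (an assumption
# made here for illustration), the aggregate off-chain work anchored per second is:
aggregate = hashes_per_second_per_vm * assertions_per_second
print(f"{aggregate:,} hashes/s")                 # ~4.85 billion, i.e. "over 4 billion"
print("vs Ethereum's ~1,600 hashes/s under its gas limit")
```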

Implications and Follow-ups

Extensions

  • Zero knowledge: Arbitrum provides privacy as long as none of the validators reveal the off-chain data. If there is a challenge, they still have to reveal a small portion of the state (such as during the bisection protocol), which can contain sensitive information. The authors propose using a zero-knowledge protocol to implement the one-step proof. While zero-knowledge proofs could in theory be used to prove the correctness of the entire state transition (and not just a single step), doing this for complex computations was not feasible with the tools available at the time.
  • Reading the base chain: As described, Arbitrum VMs do not have the ability to directly read the underlying blockchain. This could easily be solved by extending the VM instruction set to allow a VM to read the blockchain directly. To do so, the authors would create a canonical encoding of a block as an Arbitrum tuple, with one field of that tuple containing the tuple representing the previous block in the blockchain. This would allow a VM that had the tuple for the current block to read earlier blocks. The precondition of an assertion would thus specify a recent block height, and the VM would have a special instruction that pushes the associated block tuple to the stack. In order to verify a one-step proof of this instruction, the Verifier just needs to keep track of the Arbitrum tuple hash of each block (a sketch follows this list).
  • The Arbitrum architecture enables the use of virtual machines other than the Ethereum Virtual Machine. This will help support bigger smart contracts as well as multi-language support, such as C/C++, Python, Go, and Rust. This in turn might ease adoption relative to ZK-rollups, where developers have to write applications in special-purpose, newer languages.
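Returning to the “reading the base chain” extension above, here is a minimal sketch of the block-as-tuple idea; the encoding, field choices, and function names are hypothetical:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def block_tuple_hash(block_header: bytes, prev_tuple_hash: bytes) -> bytes:
    """Hash of a canonical 'block as tuple' encoding: this block's data plus the
    tuple hash of the previous block, forming a chain a VM can walk backwards."""
    return h(block_header, prev_tuple_hash)

# The Verifier would only need to keep one such hash per L1 block; a VM holding
# the tuple for the current block can read any earlier block through the chain
# of previous-block fields.
genesis = h(b"genesis")
h1 = block_tuple_hash(b"block 1 header", genesis)
h2 = block_tuple_hash(b"block 2 header", h1)
print(h2.hex())
```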

Applicability

After 3 years of development, Arbitrum was finally released on Ethereum’s mainnet.

10 Likes

@jyezie Thanks for posting a fascinating summary.

I have a few general questions about Arbitrum’s design and the company’s experience since the launch. As the summary’s author, do you feel qualified to respond to any of these?

  1. In the limited time since Arbitrum’s launch, what have been the company’s major learnings?
  2. Have there been more or fewer fraud proofs than you expected to generate in that time?
  3. Have there been any issues/learnings with the Any-Trust guarantee that requires only one honest validator?
  4. Unlike Optimism, Arbitrum supports all EVM languages, but uses a transaction processing method that can mean longer wait-times for a fraud proof. How are those differentiators working out in practice? Has Arbitrum’s fundamental design approach held up well in the real-world experience so far?
  5. What are the implications for security guarantees, and also for users/dev adoption?
6 Likes

@jyezie – in the article you mention that Arbitrum has been around for three months now and has been successfully implemented. (cc @Eric (if you’re around)). Could you tell us about the process of implementation and how it differed from expectations? Three months in, how is Arbitrum working with ETH?

2 Likes

Good questions @jmcgirk @rlombreglia. The thing with Arbitrum is that while the design is easy to understand and the papers are clear, many questions remain about what it looks like now that it is implemented. It would be awesome to get those answers from Arbitrum’s team because, in their own words, they released a mainnet beta.
For instance, let’s take a look at the security guarantees. For the VM validators, how is the onboarding process working? The stake needs to be high enough to deter malicious participation, like challenging correct states to halt the VM, but it should still allow enough validators to join and bring more resilience through decentralization.
For the end user, one question is how the “AnyTrust” guarantee applies. If I am a VM manager, that’s easy: I only need to trust myself. But the chances that, for example, Uniswap selects me for this task, or that I have enough stake to take part, are rather low. So how does a user raise the alarm to one (honest) validator?

Arbitrum brings some nice short- and long-term advantages to the Ethereum ecosystem. In the short term, Arbitrum, like other optimistic rollups, enables developers to simply port their existing Solidity code to the L2. This is especially important to profit from DeFi composability, and it is a usual point of comparison with zero-knowledge rollups, which do not yet support Solidity smart contracts. In the long term, and this is where ORs and ZKRs come together, we can expect developers to be able to use other programming languages (C++, Rust, Go, …) to develop smart contracts via more powerful virtual machines.

2 Likes

Very detailed summary! The mechanics introduced by Arbitrum remind me of Truebit (https://truebit.io/). It seems to me they are similar in multiple aspects, including both being L2 scaling solutions and the way they deal with malicious behavior. A head-to-head comparison between the two would be interesting.

2 Likes

Hey @jyezie

This protocol seems like a non-trivial breakthrough in blockchain research. It’s a pleasure to get to learn about this meaningful information. Thank you for selecting this paper and publishing the summary.

I have a few questions, and along the way, I would like to share a version of my very generalized understanding. This should put us on the same page. (And make sure I didn’t misunderstand the author. Cheers.)

–

The first reason why Ethereum smart contracts are hard to scale is that they are subject to Ethereum’s hashing power limit. Arbitrum is a protocol that moves complicated calculations off-chain, making versatile contracts possible.

To verify the data and determine the state, Arbitrum relies on third parties referred to as managers. The protocol will make sure it rewards the manager that submits the right verification. So as long as one out of the handful of managers turns out to be trustworthy, the outcome would be reliable.

This notion is referred to as the Any-Trust Guarantee: If at least one manager of a VM is honest, the VM will execute correctly according to its code. (VM is short for Virtual Machine)

My question here -

Smart contracts are for decentralized, trust-less purposes. What’s the point of having a protocol that only works when there is a trusted third party?

I understand that players don’t need to place their trust in any of the managers, but just one of them. Yet how do you know which one of them to trust?

I’d also like to ask about an unclear detail: can a “challenge” be raised by someone that has already participated in the assertion?

Thanks

2 Likes

Before I dive directly into questions specifically about Arbitrum, I want to ensure that I have the context, much like @Twan did in her comment.

Although there are other L1 options, the majority of users are still on Ethereum. Focusing solely on scaling on Ethereum, it can be done at L1 with options like moving from a Proof of Work (PoW) to a Proof of Stake (PoS) consensus mechanism or sharding. That said, these solutions take time to implement. Hence, several L2 solutions to assist with scaling have emerged.

Among L2 solutions: sidechains, state channels, plasma, rollups (zk and optimistic), and validium (similar to zkrollup, but data availability is off-chain); Arbitrum falls into the optimistic rollup family. And, some of the criteria that could be used to evaluate L2 solutions include: security, performance, usability, and how data is moved, generated, stored between/within L1 and L2.

Narrowing in on L2 rollup scaling solutions, how does Arbitrum compare to StarkNet and Optimism with respect to the evaluation criteria above?

The summary mentions, in the Implications and Follow-ups section, that a ZK protocol could be used to implement the one-step proof. Is this perhaps to address what might be considered a weakness in optimistic rollups, namely that the state transition is assumed correct until an invalid transaction is challenged, unlike ZK-rollups, where every state transition generates a SNARK which is verified by the rollup contract on the main chain? This question is asked with the understanding that all solutions must consider and try to balance trade-offs.

2 Likes

Thanks for your prompt and detailed reply, @jyezie! It’s true that it’s still early days for Arbitrum right now—possibly too early for answers to some of my questions. However, we still hope to have a member of the Arbitrum team participate in this discussion at some point.

Meanwhile, your own thoughts on porting Solidity code, the eventual use of other languages, and the appearance of more powerful VMs all suggest a strong future for Rollups of both families.

1 Like

It’s a very good summary of Arbitrum. In layer-2 solutions, the data is stored on-chain and the transactions are computed off-chain. I’m very curious about the flexibility: if one of the honest nodes suddenly goes offline, does this mean the other honest nodes can choose to reset the machine to any state the remaining validators agree on?

1 Like

They have a whole section on participation games like Truebit.

In the context of this paper, think of participating as “verifying a computation.” It costs something to verify the computation, but once you’ve verified it, you can claim to have verified it from any number of additional Sybils for free, and these Sybils are indistinguishable from “real” verifiers. The goal would then be to design a participation game (i.e. a reward function f(·)) such that in equilibrium, no player has any incentive to Sybil, and a desired number of players participate, so that the apparent number of verifiers equals the actual number of separate players who were verifiers.

In summary, Arbitrum’s authors show that Truebit is designed for a one-shot Sybil-proof setting, while smart-contract verification is more of a repeated game. This leads to an equilibrium where only one party, Sybilling as many parties, does the verification.

But someone more familiar with Truebit could make a more thorough comparison.

2 Likes

The first reason why Ethereum smart contracts are hard to scale is that they are subject to Ethereum’s hashing power limit.

Broadly speaking, I would say Ethereum (and other blockchains) have a limit on their scale because each node has to reproduce the work done by every other node. In a way, we have so many miners, but they are competing instead of cooperating to include transactions in the blockchain.

Smart contracts are for decentralized, trust-less purposes. What’s the point of having a protocol that only works when there is a trusted third party?

Good question. There seems to be a contradiction. Remember that what we call trustless, or without intermediaries, on a blockchain actually means that we have a network of intermediaries, miners/validators, that do the work. We hope that the majority are honest so that they won’t censor your transactions and will keep them in the ledger. I could make the case that the trust models of Ethereum and Arbitrum are the same; the key difference is that you assume there will be more distinct, honest parties on Ethereum than on Arbitrum.

We can clearly say we increase scalability at the expense of decentralization here.

2 Likes

Thanks for replying boldly and clearly.

What are your thoughts on the privacy improvement? Arbitrum’s privacy comes from data being revealed only to the managers, but what happens when a manager leaks the data?
I see it’s better than automatically revealing all information on public ledgers, but is there any way we can punish that misbehavior?

1 Like

The paper is an accessible introduction to Arbitrum that is easy to comprehend for newcomers to the blockchain world. I learned about the distinctive attributes of Arbitrum compared to Ethereum, and how it achieves them without limiting its usability for Ethereum developers.

The four main advantages of scalability, privacy, AnyTrust guarantee, and interoperability with Ethereum touched on all areas of concern especially from a novice point of view where risk tolerance will be relatively lower.

The most interesting feature of Arbitrum to me is the validators. I am, however, concerned about them being chosen by the dapp creator. It makes me skeptical about how honest they would be should any party decide to cheat, but that fear was allayed by the dispute resolution protocol.

It is reassuring to see that the creators of Arbitrum completely understood the gaps in other smart-contract scalability solutions and designed Arbitrum to fill them.

@jyezie Thanks for sharing such a great summary! It’s a wonderful way for learners from non-technical backgrounds to learn about L2 solutions.

Also, thanks @Twan for your enlightening questions.

I am thinking about trust mechanisms that are designed to rely on smaller and smaller numbers of honest nodes. Would it make sense to say that the essence of decentralization is not necessarily linked to the number or proportion of trusted nodes, but lies instead in the enforcement of the protocol’s rules? When a trust mechanism is designed to rely on anyone who acts in line with the rules, and its results are verified against those rules, then it actually depends on the rules rather than on any particular party. If so, relying on even a single party does not diminish decentralization, because the system still works by the rules regardless of any party’s will. Does that make sense to you?

3 Likes

A small comment on L1 scalability: by itself, moving from PoW to PoS doesn’t bring more scalability. You need to work on many parts of the blockchain and its consensus to push performance further. You can skim some of the different approaches in this list of Notable works in scaling.

Now onto your questions. You are right: when we compare L2s, we have to look at many more parameters than with L1 solutions. Personally, I would have a hard time comparing those solutions in depth because, on one hand, I haven’t read that much about the other protocols beyond some podcast debates, and on the other, I focused on the papers from Arbitrum, which lack the implementation details. So far, the best comparison resources I have found are:

  1. A L2 overview by Patrick McCorry
  2. L2BEAT – The state of the layer two ecosystem

Broadly speaking, I feel like we are trying to escape L1 consensus, yet we are realizing that we need some consensus on the data (note: I am still wrapping my head around the data availability problem). For instance, Arbitrum requires all the validators to agree, while on dYdX (which uses StarkWare’s ZK rollup) there is one entity generating the proofs. ORs have the advantage that they can onboard developers, and by extension projects, much more easily: they keep L1 composability despite their optimistic nature. ZKRs seem more powerful in the long term but have issues onboarding projects.

What does this mean for the end user then? As you pointed out, there will be many trade-offs. I am not sure how users will understand the different guarantees they have and all the means, if any, they can use to detect or stop a fraud. This brings back the question of decentralization and the trustless nature of those protocols, as also mentioned in @Ajibike_Jimoh’s and @Astrid_CH’s comments.

5 Likes

@jyezie thanks for your response. I plan on digging into the different L2 options a bit more and will use the two articles you reference as well as the Notable Works in Blockchain Scaling as my jumping off points, so many thanks! Down the rabbit hole…

1 Like

Arbitrum is an optimistic rollup solution for scaling Ethereum, but anyone who has actually used Arbitrum for transactions will have encountered some current challenges, including high transaction fees to bridge assets, a 7-day finality window for withdrawals, and a lack of dapps listed on the platform; this has contributed to a declining transaction rate on the platform. What can be done to mitigate these challenges and improve the current state of the project?

4 Likes

Arbitrum is one of the methods for achieving fast transaction speed via a layer-2 technology. But what if layer-1 could do this all by itself?

I just read a brief but fascinating article on Cardano’s forthcoming “Input Endorsers,” which are a layer-1 innovation that splits each block into two parts: the consensus block and the transaction block.

This separation, Cardano claims, will allow its proof-of-stake Ouroboros protocol to stream transactions constantly, rather than waiting for consensus.

Input Endorsers aren’t here yet; they’re scheduled for future release. But if this is accurate, it could be a big advantage for Cardano, and conceivably knock over the applecart for the many layer-2 schemes in existence (or in development) whose only raison d’etre is to speed up the inherently slow transaction rate of layer-1.

The only other article on Input Endorsers I’ve been able to find is here. Does anyone have thoughts on this, or more information on “Input Endorsers”?

1 Like

@jyezie I am impressed with this fine piece

My preferred method for “scaling” blockchain is to employ a vast number of them, one for each bit of information that can be logically segregated. For example, one chain may be used to authenticate the user, another to validate the data, a third for timing, etc.

Blockchain: A technology so amazing that you have to work around it (thus negating its main touted advantage) in order to scale to any usable application.

Blockchains are by nature sluggish. Really slow. This is because each time we wish to add records, we must first find the right answer to an extremely challenging problem. (Keep in mind: no updates or removals.) A blockchain will expand quickly and perform poorly if we store a lot of data in it, much as we would in a relational database. By dividing the data into multiple parts, we can answer the challenging problems in parallel on smaller chains. Even more significant, the business logic connected to each “small” chain may have zero knowledge of the others. Both multiprocessing and compartmentalization are advantages, and it improves data security.

“Off-chain solution?” I’m not sure what you mean here unless you’re referring to business logic, which is intrinsically off-chain. The chains solely serve to store data. Smart contracts can be implemented (or so I’ve been told) on or off a chain. My ideal would be to execute the job off-chain (even Ricardian) and save/chain crucial points along the way that exhibit good transaction behavior.

1 Like

Nice work @jyezie
Blockchain scalability, which mostly refers to transaction speed, is undoubtedly the bitcoin industry’s holy grail and bottleneck. Cryptocurrency transactions currently take longer than regular payment methods. However, several theories on how to best get around this obstacle are being worked on by the crypto groups. The blockchain network’s capacity is determined by scalability, which also affects the number of network nodes, the amount of transactions the network can handle, how quickly the network can handle transactions, and other factors. The term “scalability” can be a bit ambiguous because Bitcoin’s blockchain can grow as more users join the network. The PoW algorithm will automatically alter the level of difficulty, and the network may support any quantity of nodes.
The adage “Bitcoin is not scalable” focuses mostly on its throughput, or the fact that it can only process seven transactions per second (tps), which is insufficient for practical use (when compared to VISA, which can reportedly reach 24,000 tps). Another problem is that individuals won’t wait an hour to validate that their purchase of a plate of jollof is legitimate.
Blockchain scalability is a complex problem, and several academic and industry initiatives are attempting to address the scalability trilemma. In my opinion, off-chain solutions can be useful for scaling blockchains with smart contracts.
In order to enable quicker transactions, off-chain solutions add a second layer to the primary blockchain network. In order to save space and ease network congestion, transactions are “off-loaded” from the primary blockchain and placed on the secondary protocols, which are built on top of it. This off-chain solution involves two parts:
A sidechain: this is an additional blockchain connected to the primary blockchain (mainchain). A two-way peg can be used to move assets between the mainchain and sidechains at specified rates. By relocating certain applications to sidechains, load can be taken off the mainchain. If inter-blockchain communication grows more effective, sidechains are one of the promising answers to the scalability challenge.

Payment channel: an off-chain network that exists in addition to the main blockchain is known as a payment channel. The goal is to create a communication path between the parties involved in a transaction. Since every transaction in the channel happens off-chain, there is no need for universal consensus. As a result, these transactions are carried out promptly, at a fraction of the cost, and with lightning-fast speed using a smart contract.
In conclusion, scaling techniques, including the one in this work, will probably play a significant part in Ethereum’s quest for scalability. It remains to be seen, however, whether this will be sufficient to ensure scalability over the long run.

2 Likes