Research Summary: PlonK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge

TLDR:

  • PlonK is a new zk-SNARK construction built around a permutation argument over univariate polynomials on a multiplicative subgroup, and it uses a universal, updatable structured reference string (SRS). Its name stands for “Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge”.

  • Unlike popular zk-SNARK schemes such as [GROTH16] (used in Zcash and EVM applications), which require a circuit-specific common reference string (CRS), PlonK’s setup is not application-specific. That carries usability benefits.

  • While the scheme still relies on a trusted setup to generate random parameters, no additional parameters are required for new applications built on top of it. That makes it a more practical system for verifying smart contracts.

  • PlonK has attracted industry interest by striking a good balance between efficiency in validating proofs and the convenience of building applications on top of it. Notably, the Aztec project is building a layer 2 solution based on this technology.

Core Research Question

How can we avoid redoing trusted setups, while keeping setup time, proof size, proving time, and verifier time reasonable (i.e., succinct) compared to the CRS-based [GROTH16] model?

Citations

Gabizon, Ariel, Zachary J. Williamson, and Oana Ciobotaru. “PlonK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge.” IACR Cryptol. ePrint Arch. 2019 (2019): 953.

[GROTH16]
Groth, Jens. “On the size of pairing-based non-interactive arguments.” Annual international conference on the theory and applications of cryptographic techniques. Springer, Berlin, Heidelberg, 2016.

[MBKM19]
Maller, Mary, et al. “Sonic: Zero-knowledge SNARKs from linear-size universal and updatable structured reference strings.” Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2019.

[KZG10]
Kate, Aniket, Gregory M. Zaverucha, and Ian Goldberg. “Constant-size commitments to polynomials and their applications.” International conference on the theory and application of cryptology and information security. Springer, Berlin, Heidelberg, 2010.

[BG12]
Bayer, Stephanie, and Jens Groth. “Efficient zero-knowledge argument for correctness of a shuffle.” Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, Berlin, Heidelberg, 2012.

[GMR89]
Goldwasser, Shafi, Silvio Micali, and Charles Rackoff. “The knowledge complexity of interactive proof systems.” SIAM Journal on computing 18.1 (1989): 186-208.

[GKMMM18]
Groth, Jens, et al. “Updatable and universal common reference strings with applications to zk-SNARKs.” Annual International Cryptology Conference. Springer, Cham, 2018.

Vitalik Buterin. “Understanding PLONK”

Background

[GMR89] introduced the first interactive proof system built on a public coin-tossing protocol. In a similar sense, [GROTH16] is the first fully succinct non-interactive zero-knowledge (NIZK) protocol based on a CRS. A CRS has to be pre-processed in a trusted ceremony and cannot be reused if the circuit is updated. To solve this, [GKMMM18] proposed the first NIZK protocol with an SRS, but it comes with a caveat: its Gaussian-elimination computation is expensive. Sonic [MBKM19] is the first fully succinct NIZK with an SRS, but its efficiency is limited compared to PlonK. It is not until PlonK that we have a fully succinct and efficient SRS-based NIZK protocol, where the trusted setup no longer has to be redone for every circuit design update and the toxic-waste problem is mitigated.

Summary

  • Introduction

    • The authors claim a better proving time than [MBKM19], thanks to a permutation argument over univariate evaluations on a multiplicative subgroup (based on [BG12]), rather than over coefficients of a bivariate polynomial as in [MBKM19].

    • PlonK provides two variants: one where the combined degree of the polynomials is 9(n + a), and one where it is 11(n + a), where n is the number of multiplication gates and a the number of addition gates in the circuit. The former has a larger proof size but reduces the required prover computation by about 10% compared to the latter.

    • Sonic, Groth16, AuroraLight, Fractal, and Marlin are compared in depth, though their performance cannot be compared directly, because their circuit designs (ratios of addition to multiplication gates) and hence their constraint systems differ.

  • Preliminaries

    • The AGM [FKL18] model is used to evaluate the security assumptions against algebraic adversaries (the soundness check), ensuring that such adversaries can win the game with only negligible probability negl(λ).
  • The Kate commitment scheme [KZG10] is improved in PlonK into a batched form, in order to make parallel processing (opening) of commitments at each evaluation point possible. To quote the authors: “Ultimately, in the ‘grand product argument’, this reduces to checking relations between coefficients of polynomials at ‘neighbouring monomials’.”

  • The authors define a low-degree polynomial commitment scheme, a slightly simplified version of those in [KZG10] and [MBKM19], in order to achieve the desired properties for polynomial commitments in PlonK.

    • The authors prove that the simplified low-degree polynomial commitment achieves knowledge soundness under the AGM.

    • The authors describe the pre-processing (trusted setup) of ranged polynomials.

  • PlonK’s heart, the permutation argument with univariate polynomials on a multiplicative subgroup, is described here.

    • PlonK’s batched version of permutation check on polynomial commitments is described here.
  • A new form of constraint system is described here. Vitalik Buterin describes it as having two kinds of constraints: copy constraints (“encoded” with permutation polynomials) and additive/multiplicative gate constraints (“encoded” with wire polynomials). Please refer to [Understanding PLONK] for more details.

  • Main protocol of the proving system.

  • A wrap-up of all of the above: pre-processing, proving, calculating challenges beforehand, verifying via opening commitments, checking the opening proofs, and doing evaluations on points/commitments.
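The permutation argument at PlonK’s heart can be illustrated with a toy grand-product check. This is a simplified sketch over a small prime field: the real protocol runs this identity over polynomial commitments on a multiplicative subgroup, and the modulus, values, and function names here are illustrative only.

```python
# Toy grand-product permutation check over a tiny prime field.
P = 101  # small prime, for illustration only

def grand_product_check(f, g, sigma, beta, gamma):
    """g is f permuted by sigma (g[i] == f[sigma[i]]) iff
    prod_i (f[i] + beta*i + gamma) == prod_i (g[i] + beta*sigma[i] + gamma)
    holds, except with small probability over random beta, gamma."""
    lhs = rhs = 1
    for i in range(len(f)):
        lhs = lhs * (f[i] + beta * i + gamma) % P
        rhs = rhs * (g[i] + beta * sigma[i] + gamma) % P
    return lhs == rhs

f = [3, 7, 2, 9]
sigma = [2, 0, 3, 1]
g = [f[s] for s in sigma]                    # a genuine permutation of f
assert grand_product_check(f, g, sigma, beta=5, gamma=11)
assert not grand_product_check(f, [3, 7, 2, 8], sigma, beta=5, gamma=11)
```

Because β and γ are chosen after the values are committed, a prover cannot engineer a false pair of lists whose products coincide, which is what lets copy constraints reduce to a single product check.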

Method

Please refer to section 8.4 in the original paper for detailed formulae. What follows is a qualitative description of the steps provided.

The process for the respective roles is ordered below.

  • Trusted setup (pre-processing):
    An S-ranged polynomial protocol is defined so that a verifier can “ask” the protocol if some polynomial equations hold on a certain range of input values. This way the information can be “hidden” inside. The Kate commitment scheme and opening of such commitments also play a role in the zero-knowledge property. Please refer to section 4 in the original paper.

  • Prover, to compute:

    1. From the circuit: the permutation, wire, and quotient polynomials (and the challenges to such polynomials)
    2. The evaluation points (and challenges of such points), and the opening evaluations (and challenges of such openings)
    3. The linearisation polynomials and their evaluations
    4. Finally, a multipoint evaluation challenge.
  • Verifier, to compute:

    1. Validate that the wire/permutation/quotient/opening commitments are indeed in range.
    2. Validate that the linearisation commitments and opening evaluation commitments are indeed in a valid range.
    3. Validate that the public inputs are indeed in range.
    4. Compute the challenges as what the prover did.
    5. Compute zero polynomial evaluation.
    6. Compute Lagrange polynomial evaluation.
    7. Compute public input polynomial evaluation.
    8. Compute quotient polynomial evaluation.
    9. Compute batched polynomial commitment, in two steps.
    10. Compute group-encoded batch evaluation.
    11. Batch validate all the above evaluations and compare them to the proof provided.
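The commitment openings the verifier checks follow the Kate/[KZG10] pattern. Below is a toy arithmetic skeleton of a single (non-batched) opening, and it is deliberately insecure: the trapdoor TAU is public here, whereas the real scheme hides τ inside elliptic-curve group elements and performs the final check with a pairing. The modulus and all names are illustrative.

```python
# Toy Kate/KZG-style commit/open/verify over a tiny prime field.
P = 101     # illustrative field modulus
TAU = 37    # trusted-setup trapdoor ("toxic waste"); public only in this toy

def poly_eval(coeffs, x):
    # Horner evaluation of coeffs[0] + coeffs[1]*x + ... mod P
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def commit(coeffs):
    return poly_eval(coeffs, TAU)            # stands in for [p(tau)]*G

def open_at(coeffs, z):
    """Return (p(z), proof) with proof = q(TAU), q(x) = (p(x) - p(z)) / (x - z)."""
    v = poly_eval(coeffs, z)
    q = [0] * (len(coeffs) - 1)              # synthetic division by (x - z)
    carry = 0
    for k in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[k] + z * carry) % P
        q[k - 1] = carry
    return v, poly_eval(q, TAU)              # stands in for [q(tau)]*G

def verify(commitment, z, v, proof):
    # the pairing check e(C - [v], G) == e(proof, [tau - z]) collapses
    # to plain field arithmetic in this toy model
    return (commitment - v) % P == proof * (TAU - z) % P

v, proof = open_at([3, 2, 1], z=5)           # p(x) = 3 + 2x + x^2, so p(5) = 38
assert v == 38
assert verify(commit([3, 2, 1]), 5, v, proof)
assert not verify(commit([3, 2, 1]), 5, (v + 1) % P, proof)
```

PlonK’s batched variant amortizes many such openings into one, which is what steps 9–11 above exploit.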

Results

PlonK is the second fully succinct SNARK with a universal, updatable SRS (after Sonic [MBKM19]).

A performance comparison table is given in the original paper (not reproduced here).
The authors claim that the choice of univariate polynomials over multiplicative subgroups increases both the prover and the verifier efficiency, under different circuit assumptions.

In common circuits, the ratio of addition gates to multiplication gates is about 2:1. Under this circuit assumption, PlonK’s prover does about 2.25 times the work of [GROTH16]’s, and about one tenth the work of [MBKM19]’s.

Note that addition gates are often considered computation-free, as in the CRS-based [GROTH16] setting, which allows “unlimited” fan-in circuits. PlonK, however, prefers constant (two) fan-in circuits, for performance reasons.
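For concreteness, every fan-in-two PlonK gate satisfies one unified equation, qL·a + qR·b + qO·c + qM·a·b + qC = 0, with selector values choosing between addition and multiplication. A sketch over a toy field (the tiny modulus is for illustration only):

```python
# PlonK's unified fan-in-two gate equation over a toy prime field:
#   qL*a + qR*b + qO*c + qM*a*b + qC = 0  (mod P)
P = 101

def gate_holds(a, b, c, qL, qR, qO, qM, qC):
    return (qL * a + qR * b + qO * c + qM * a * b + qC) % P == 0

ADD = dict(qL=1, qR=1, qO=P - 1, qM=0, qC=0)   # enforces a + b = c
MUL = dict(qL=0, qR=0, qO=P - 1, qM=1, qC=0)   # enforces a * b = c

assert gate_holds(3, 4, 7, **ADD)       # 3 + 4 = 7
assert gate_holds(3, 4, 12, **MUL)      # 3 * 4 = 12
assert not gate_holds(3, 4, 8, **ADD)
```

Because both gate types share one equation, each gate costs the same, which is why the addition-to-multiplication ratio matters for PlonK in a way it does not for [GROTH16].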

Benchmarks were produced with https://github.com/AztecProtocol/barretenberg/ (the chart is not reproduced here).

For performance comparisons with less obvious setups, such as the randomized sumcheck approach of Fractal/Marlin, the authors emphasize their constraint-system design, quote: “we focus on constant fan-in circuits rather than R1CS/unlimited addition fan-in; thus our linear constraints are just wiring constraints that can be reduced to a permutation check”.

That makes Marlin a little tricky to compare directly, so the authors compare the average case (assuming the same number n of multiplication gates, where PlonK outperforms by 2x on prover group operations and proof size), the extreme case, and the lower bound. PlonK is superior to Marlin in all cases except the most extreme one, where the R1CS constraints are “fully dense”, i.e., the circuit is filled with many large fan-in gates rather than the constant fan-in that PlonK prefers.

Discussion and Key Takeaways

  • CRS schemes generally used in industry are bad for maintainability, as new parameters must be generated in a trusted parameter generation ceremony for each new application. PlonK iterates upon this system through a new SRS scheme.

  • Under an SRS, the trusted setup produces parameters that generalize to any circuit up to a certain bounded size, and security holds as long as at least one participant in the ceremony is honest, leaving adversaries only negl(λ) probability of winning the game (convincing the verifier of a false proof without knowing a witness).

  • PlonK carries an interesting set of trade-offs. It features increased performance compared to [MBKM19] and [GKMMM18]. While its performance is a little bit worse than CRS-based [GROTH16] schemes, its flexibility makes it better suited for layer 2 applications.

Implications and Follow-ups

  • Instead of needing heavyweight, full-powered zero-knowledge primitives to achieve soundness, we now have polynomial commitment schemes, a great tool in the zero-knowledge protocol toolbox.

  • We can open the commitments, evaluate the points and compare them to the argument and the proof provided by provers, while still “hiding” the nature of the computation inside the commitment to achieve computational soundness.

  • This enables the creation of private applications that provide proofs of computational soundness under the assumption that there was at least one honest participant in the initial parameter generation ceremony.

Vitalik Buterin provides some additional clarity:

“There are other types of polynomial commitments coming out too. A new scheme called DARK (“Diophantine arguments of knowledge”) uses “hidden order groups” such as class groups to implement another kind of polynomial commitment. Hidden order groups are unique because they allow you to compress arbitrarily large numbers into group elements, even numbers much larger than the size of the group element, in a way that can’t be “spoofed”; constructions from VDFs to accumulators to range proofs to polynomial commitments can be built on top of this. Another option is to use bulletproofs, using regular elliptic curve groups at the cost of the proof taking much longer to verify. Because polynomial commitments are much simpler than full-on zero-knowledge proof schemes, we can expect more such schemes to get created in the future.”

Applicability

  • PlonK is well suited for zk-rollups on Ethereum, which address throughput issues on mainnet. The authors of PlonK are currently pursuing an implementation under the Aztec project.

  • Aztec developed zk.money, a privacy-shielding solution on Ethereum based on a zk-rollup, which is further powered by PlonK. This scheme is intended to enable the creation of private smart contracts off-chain with fast confirmation times on-chain.

  • The Aztec team is also working on Ultra-PlonK, which precomputes some logic as lookup gates and makes them available in regular PlonK. The precomputed gates can implement “SNARK-unfriendly” operations; this is exactly the distinctive feature of Ultra-PlonK.

  • If successful, Ultra-PlonK could be more succinct than most zk-SNARK approaches: it is expected to consume only 3k gas per transaction, whereas an average ERC20 transfer consumes 45k.

  • The team is also working on SHPLONK, a polynomial commitment scheme that can be used in PlonK or other zero-knowledge protocols.

  • Turbo-PLONK programs make arbitrary arithmetic custom gates possible. This enables breaking further away from the efficiency limitations of traditional R1CS, for example in the performance of elliptic curve operations.

  • Apart from the official PlonK implementation written in C++, there are two Rust implementations available. One from dusk-network can be found here. For another implementation by Fluidex, you can even write circuits in a circom-compatible fashion.
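The lookup-gate idea behind Ultra-PlonK can be sketched conceptually as follows. Note this is a caricature with hypothetical names: the real plookup argument proves multiset inclusion with a grand-product check, not the naive membership test below.

```python
# Conceptual sketch of lookup gates: a "SNARK-unfriendly" operation,
# here 8-bit XOR, is precomputed as a table, and each witness row is
# constrained to be an entry of that table.
XOR_TABLE = {(a, b, a ^ b) for a in range(256) for b in range(256)}

def lookup_rows_valid(rows):
    # every (left, right, output) row must appear in the precomputed table
    return all(row in XOR_TABLE for row in rows)

assert lookup_rows_valid([(5, 3, 6), (255, 1, 254)])
assert not lookup_rows_valid([(5, 3, 7)])
```

Expressing XOR with arithmetic gates alone would require bit decomposition of every operand; the table turns it into a single lookup constraint per row.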

Excellent summary @Jerry_Ho!

As you pointed out, generalizable SNARKs are more practical for developers wanting to use this technology since previous iterations required the generation of trusted parameters for every new application.

At the same time, the use of a single parameter does make the initial setup substantially more critical. If, for example, attackers were able to side-channel that process, they could potentially forge proofs for all applications that make use of it.

Do you know what Aztec used to set up their system? Did they use Zcash’s sapling parameters to bootstrap, as other projects have?

Hi,
We used our own ignition ceremony

https://ignition.aztecprotocol.com/

Thank you for the link @Ariel_Gabizon and welcome to SCRF!

I recently came across this dashboard https://ignition.aztecprotocol.com/ – what a fun way to showcase the interactions and geo-distribution of the MPC participants, well done.

Given your work on identifying the trade-offs of big parameter generation ceremonies (e.g. Cheon attacks), what was the approach in determining the appropriate size of Ignition?

So, I wasn’t at Aztec when the ceremony happened; I joined just a bit after…and I think we weren’t too aware of Cheon back then, so that didn’t affect the size choice. I think it was just about getting a large enough circuit size, and at the same time having the participation time be reasonable (and doable from a laptop and not just a big multicore machine)

It’s a really great summary for understanding PlonK. Thanks @Jerry_Ho!
I know that both PlonK and zk-SNARKs need a trusted setup for initialization, but zk-STARKs don’t. I’m curious about this part of the citation:

“PLONK uses ‘Kate commitments’, based on a trusted setup and elliptic curve pairings, but you can instead swap it out with other schemes, such as FRI.” Could I say this means that PlonK with FRI commitments in place of Kate commitments is what’s named RedShift?

The AGM [FKL18] citation seems to be missing; for those who are interested in it:
Fuchsbauer, Georg, Eike Kiltz, and Julian Loss. “The algebraic group model and its applications.” Annual International Cryptology Conference. Springer, Cham, 2018.

I have the same question as @Sean1992076. I also wonder whether we can describe the major change from Sonic to PlonK as the replacement of the permutation argument protocol, which in Sonic was inspired by BCC+16 [2] (mentioned in PlonK section 1.1 [1]) and in PlonK is replaced by BG12 [3]? (It would be wonderful if @Ariel_Gabizon could answer.)

I saw @Sean1992076 has published a Research Summary for RedShift [4], with some discussion of ZK solutions. It seems RedShift is more ideal than PlonK in most conditions, but I’m curious how the verifier’s work differs. I’m not familiar with FRI, but I think verification also changes due to the replacement of Kate commitments with FRI commitments.

  1. Gabizon, Ariel, Zachary J. Williamson, and Oana Ciobotaru. “PlonK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge.” IACR Cryptol. ePrint Arch. 2019 (2019): 953.
  2. Bootle, Jonathan, Andrea Cerulli, Pyrros Chaidos, Jens Groth, and Christophe Petit. “Efficient zero-knowledge arguments for arithmetic circuits in the discrete log setting.” Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, Berlin, Heidelberg, 2016: 327–357.
  3. Bayer, Stephanie, and Jens Groth. “Efficient zero-knowledge argument for correctness of a shuffle.” Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, Berlin, Heidelberg, 2012: 263–280.
  4. Research Summary: REDSHIFT: Transparent SNARKs from List Polynomial Commitment IOPs

Hi,
To answer the last two messages:

Yes, the biggest change from Sonic is the permutation argument.
And yes, RedShift is a version of PlonK with FRI commitments.

Interesting new development related to PlonK. Earlier today, Trail of Bits disclosed a vulnerability affecting some implementations of Girault, Bulletproofs, and PlonK.

According to their blog post:

Let’s look at PlonK as an example. If you inspect the details of the protocol, you will see that the authors do not explicitly explain how to compute the Fiat-Shamir transformation (i.e., hash these values). Instead, the authors instruct users to compute the value by hashing the “transcript.” They do describe what they mean by “transcript” in another location, but never explicitly. A few implementations of the proof system got this computation wrong simply because of the ambiguity around the term “transcript.”

It appears a few projects misimplemented Fiat-Shamir in their schemes, which is not unheard of. As the post acknowledges, such errors often stem from developers and researchers being unaware that a new vulnerability has emerged and that their schemes need to be updated.

Truly highlights the importance of open discussions around such systems not only so that they can be better understood, but also so that there’s increased awareness of when new vulnerabilities emerge.
