Research Summary: REDSHIFT: Transparent SNARKs from List Polynomial Commitment IOPs

TLDR:

  • Ethereum blockchain transactions are slow and expensive. Zero-Knowledge Proofs (ZKPs) combined with a layer-2 solution could provide efficient verification for these transactions (meaning faster speeds) without compromising security.

  • This paper describes REDSHIFT, a ZKP system based on a new Interactive Oracle Proof (IOP) primitive called list polynomial commitment, and a transparent SNARK algorithm.

  • The system could solve one of the biggest problems with zero-knowledge smart contracts: it does away with application-specific trusted setups and creates a generalized system that allows developers to build on top of REDSHIFT without needing a trusted-parameter generation ceremony, while also supporting smart contracts.

  • REDSHIFT improves on PLONK by changing the Kate commitment to a FRI commitment, achieving plausibly quantum-resistant zk-SNARK cryptography.

  • REDSHIFT has garnered considerable interest, as this scheme does not require a trusted setup, nor does it rely on novel mathematical assumptions. Beyond quantum resistance, this ZKP algorithm may offer better efficiency and lower complexity than alternatives.

  • The following table shows the similarities and differences between the REDSHIFT and PLONK algorithms:

    | Algorithm Name | Arithmetization | Polynomial Commitment | Feature |
    |---|---|---|---|
    | PLONK | Statement → Circuit → QAP | Kate commitment | General CRS |
    | REDSHIFT | Statement → Circuit → QAP | FRI commitment | No trusted setup |

Citation

  • Kattis, A., Panarin, K., & Vlasov, A. (2019). RedShift: Transparent SNARKs from List Polynomial Commitment IOPs. IACR Cryptol. ePrint Arch., 2019, 1400.
  • In addition to the paper reference above, this summary also uses translated information from the following blog post to make the understanding of RedShift more accessible:
  • 小白 (2021). REDSHIFT: PLONK without a trusted setup [REDSHIFT:不需要可信设置的PLONK]. Zhihu. https://zhuanlan.zhihu.com/p/355820474
  • Quadratic Arithmetic Programs: from Zero to Hero | by Vitalik Buterin | Medium


Core Research Question

  • In a PLONK algorithm, how do we ensure that the value provided by the prover is indeed the value of the polynomial at a certain point, rather than a value deliberately chosen to ensure that the verification passes?

Background

  • PLONK is a zero-knowledge-proof scheme. Its name stands for “Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge”.
  1. Legacy PLONK schemes require a trusted setup procedure similar to that needed for the SNARKs in Zcash, but it is a “universal and updateable” trusted setup. This means there is one single trusted setup for the whole scheme, after which you can use the scheme with any program.
  2. There is a way for multiple parties to participate in the trusted setup such that it is secure as long as at least one of them is honest, and this multi-party procedure is fully sequential: first one person participates, then the second, then the third. The full set of participants does not even need to be known ahead of time. New participants could just add themselves to the end. This makes it easy for the trusted setup to have a large number of participants, making it quite safe in practice.
  • Polynomial commitment: PLONK uses “Kate commitments”, based on a trusted setup and elliptic curve pairings, but you can instead swap them out with other schemes, such as FRI (which would turn PLONK into a kind of STARK) or DARK (based on hidden-order groups). This means the scheme is theoretically compatible with any (achievable) tradeoff between proof size and security assumptions.
  1. The “fancy cryptography” it relies on is one single standardized component, called a “polynomial commitment”

Source: Vitalik Buterin. “Understanding PLONK“
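The sequential multi-party ceremony described above can be sketched in a few lines. This is a toy model with made-up parameters (a prime field stands in for the elliptic-curve group that real ceremonies use); it only illustrates why a single honest participant destroying their secret suffices.

```python
# Toy sketch of the sequential trusted-setup ceremony (hypothetical
# parameters). Real ceremonies operate on elliptic-curve points; here
# exponentiation in a prime field stands in for the group operation.
import secrets

P = 2**127 - 1          # a Mersenne prime, standing in for the group
G = 5                   # toy generator

def contribute(srs_elem, secret):
    """A participant updates the SRS by raising it to their secret."""
    return pow(srs_elem, secret, P)

srs = G
contributions = []
for _ in range(3):                      # three sequential participants
    t = secrets.randbelow(P - 3) + 2    # participant's secret randomness
    contributions.append(t)
    srs = contribute(srs, t)            # hand the updated SRS to the next

# The effective secret is the product of all contributions (taken mod
# the group order P-1); if any single participant destroys their t,
# nobody can reconstruct it, which is why one honest party suffices.
tau = 1
for t in contributions:
    tau = (tau * t) % (P - 1)
assert srs == pow(G, tau, P)
```

Note that participants never need to coordinate in advance: each one simply transforms the current SRS and passes it on, matching the open-ended ceremony described above.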

This figure showcases the tradeoffs between proof size and security assumptions. STARKs are feasible post-quantum secure schemes and have a fully trustless setup, but have high computational overhead. PLONK has a minimal trusted setup, and its proofs are smaller than STARK proofs.

  • Arithmetization: The first step of a zero-knowledge proof algorithm is arithmetization, which converts the problem to be proved by the prover into the form of a polynomial equation. If the polynomial equation holds, the original problem relationship holds. It is relatively simple to check whether a polynomial equation holds: by the Schwartz-Zippel theorem, two distinct polynomials of degree at most n agree on at most n points, so evaluating both sides at a random point detects inequality with high probability.
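The Schwartz-Zippel argument can be checked directly on a toy field. The polynomials and modulus below are illustrative choices, not anything from the paper:

```python
# Schwartz-Zippel sketch over the small field F_97 (toy parameters):
# two distinct polynomials of degree <= n agree on at most n points,
# so evaluating both at one random point detects inequality w.h.p.
P = 97

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial given low-to-high coefficients (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

f = [3, 0, 5, 1]   # 3 + 5x^2 + x^3  (degree 3)
g = [3, 1, 5, 1]   # same polynomial except the linear term

# f - g has degree <= 3, so they can agree on at most 3 of the 97 points.
agreements = sum(1 for x in range(P) if poly_eval(f, x) == poly_eval(g, x))
assert agreements <= 3
print(f"f and g agree on {agreements} of {P} points")
# → f and g agree on 1 of 97 points
```

With a cryptographically large field the agreement probability at a random point becomes negligible, which is what makes the random-evaluation check sound.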

  • Quadratic Arithmetic Programs (QAPs): a way of encoding circuit satisfaction as polynomial divisibility. The circuit’s constraints are interpolated into polynomials so that a valid witness makes A(x)·B(x) − C(x) divisible by the vanishing polynomial Z(x) of the constraint domain (see Vitalik Buterin’s “Quadratic Arithmetic Programs: from Zero to Hero”, cited above).
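A minimal sketch of the QAP-style divisibility check, with toy polynomials chosen by construction (nothing here comes from the paper):

```python
# Toy QAP-style divisibility check over F_97 (illustrative parameters).
# A valid witness makes A(x)*B(x) - C(x) divisible by the vanishing
# polynomial Z(x) of the constraint domain; the quotient H(x) is part
# of the proof. Coefficients are stored low-to-high.
P = 97

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def psub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x - y) % P for x, y in zip(a, b)]

def pdivmod(num, den):
    """Polynomial long division over F_P; returns (quotient, remainder)."""
    num = num[:]
    q = [0] * (len(num) - len(den) + 1)
    inv = pow(den[-1], P - 2, P)  # inverse of the leading coefficient
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] * inv % P
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % P
    return q, num[:len(den) - 1]

Z = pmul([P - 1, 1], [P - 2, 1])       # Z(x) = (x-1)(x-2), domain {1, 2}
A, B = [1, 1], [2, 3]                  # A(x) = 1+x, B(x) = 2+3x
H_true = [5, 1]                        # pick a quotient H(x) = 5+x ...
C = psub(pmul(A, B), pmul(Z, H_true))  # ... and force C = A*B - Z*H

H, rem = pdivmod(psub(pmul(A, B), C), Z)
assert all(r == 0 for r in rem) and H == H_true  # divisibility holds
```

A cheating prover whose witness violates a constraint would leave a nonzero remainder, and the divisibility check fails.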

  • zk-SNARK common reference string (CRS) notions:

  1. Common Reference String (CRS): both the prover and the verifier have access to the same string CRS, drawn from some fixed distribution.

  2. Universal Common Reference String (universal CRS) allows a single setup to support all circuits of some bounded size.

  3. Updatable CRS/SRS allows an open and dynamic set of participants to contribute secret randomness to it indefinitely. Although this is still a trusted setup in some sense, it increases confidence in the security of the parameters, as only one contributor must have destroyed their secret randomness in order for the SRS to be secure.

  • Transparent zk-SNARK constructions: a new approach to the above problem relies on creating a universal SRS at the preprocessing phase, which can then be used in tandem with any possible predicate (or circuit). This scheme relies on two main ingredients:
  1. encoding the circuit satisfaction problem of the predicate in question as a property of some (low-degree) polynomial f.
  2. committing to f using a polynomial commitment scheme.
  • Sonic: a universal, updatable scheme that carries a small set of global parameters. Proof sizes are small (256 bytes) and verifier time is competitive with the fastest zk-SNARKs in the literature. It is especially well-suited to systems where the same zk-SNARK is run by many different services and verified by many different parties; this is exactly the situation for many blockchain systems.

  • Polynomial Commitment Schemes: A polynomial commitment scheme that is both transparent and efficiently computable would yield transparent SNARK constructions that have the potential to satisfy all of the requirements of a fully succinct, transparent and plausibly quantum-resistant zk-SNARK.
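As a rough intuition for why FRI-style commitments stay succinct, here is one folding round of a FRI-like low-degree test. This is a toy sketch with an arbitrary small field, not the paper's construction:

```python
# One folding round of a FRI-style low-degree test (toy sketch).
# Writing f(x) = f_even(x^2) + x * f_odd(x^2), the verifier's random
# challenge beta folds the two halves into a polynomial of half the
# degree; repeating log(d) times reduces to a constant.
import random

P = 97

def fold(coeffs, beta):
    """Return coefficients of f_even + beta * f_odd (degree halves)."""
    even = coeffs[0::2]
    odd = coeffs[1::2]
    n = max(len(even), len(odd))
    even += [0] * (n - len(even))
    odd += [0] * (n - len(odd))
    return [(e + beta * o) % P for e, o in zip(even, odd)]

f = [random.randrange(P) for _ in range(8)]   # degree < 8

g = fold(f, random.randrange(P))
assert len(g) == 4      # degree bound halves: < 8  ->  < 4
h = fold(g, random.randrange(P))
assert len(h) == 2      # ... and halves again each round
```

Because the degree bound halves every round, only logarithmically many rounds (and Merkle openings) are needed, which is the source of the scheme's succinctness.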

Summary

  • This paper presents a ZKP system called REDSHIFT that does not require a trusted setup and does not rely on complex mathematical primitives. At the same time, it is plausibly quantum-resistant and offers better efficiency (STARK proofs are too large) and lower complexity.
  • There are certain questions about the above method, such as: “How do we ensure that the value provided by the prover is indeed the value of the polynomial at a certain point, rather than a value deliberately chosen so that verification passes, and not actually computed from the polynomial?” In the classic SNARK algorithm, this is guaranteed by the knowledge-of-coefficient assumption (KCA); for the specific principle, refer to Vitalik Buterin’s zk-SNARKs series. The PLONK algorithm introduces the concept of a polynomial commitment; the specific principle is covered in “PLONK: Protocol for Zero-Knowledge Proof Algorithm”.

Method

  • In the PLONK algorithm, in order to convince the verifier that the polynomial equation holds, a random evaluation point is selected. The prover then provides commitments to various polynomials (including the setup, constraint, and witness polynomials) using the Kate commitment scheme, which requires a trusted setup and relies on the discrete logarithm problem. Since Kate commitments are used as a sub-protocol, the PLONK algorithm naturally also requires a trusted setup and relies on the discrete logarithm problem.
  • The following introduces the protocol portion of the REDSHIFT algorithm in detail. As mentioned earlier, this algorithm closely parallels the PLONK algorithm:
  1. Setup

Fix FRI parameters and the degree d for FRI games (that is, the degree of all quotient polynomials). The prover is given an explicit representation of all constraint polynomials, and the verifier is given oracle access to them alongside the “distinguishing” point z (which in general is different for each constraint polynomial). The verifier is given PI(x), the public-inputs polynomial, and the honest prover is additionally given fL, fR, fO, the witness-wire polynomials. All commitments in this section are FRI commitments.

  2. Protocol

  1. Prover chooses masking polynomials h1(x), h2(x), h3(x) ∈ F^{<k}[x], where the choice of k will be described later, and defines new witness polynomials f1(x) = fL(x) + h1(x)Z(x), f2(x) = fR(x) + h2(x)Z(x), f3(x) = fO(x) + h3(x)Z(x).
  2. Prover sends commitments to polynomials f1, f2, f3 to the verifier.
  3. Verifier chooses random β, γ ∈ F and sends them to the prover.
  4. Verifier sends random a1, …, a6 ∈ F to the prover.
  5. Define the following polynomials:

a) F1(x) = L1(x)(P(x) − 1)

b) F2(x) = L1(x)(Q(x) − 1)

c) F3(x) = P(x)p′(x) − P(xg)

d) F4(x) = Q(x)q′(x) − Q(xg)

e) F5(x) = Ln(x)(P(xg) − Q(xg))

f) F6(x) = qL(x)·fL(x) + qR(x)·fR(x) + qO(x)·fO(x) + qM(x)·fL(x)·fR(x) + (qC(x) + PI(x))

Later, they show that for honest provers all of the {Fi(x)} are identically zero on the domain H∗. This means that all of the {Fi(x)} are divisible by Z(x) in the ring F[x], hence so is their linear combination F(x) = a1·F1(x) + … + a6·F6(x). The prover computes T(x) = F(x)/Z(x) and sends the verifier a commitment to T(x).
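The masking step at the start of the protocol can be illustrated concretely. The sketch below uses a toy field F_97 with an order-8 multiplicative subgroup standing in for the domain H (parameters chosen for illustration, not from the paper): adding h1(x)·Z(x) leaves the witness unchanged on H, since Z vanishes there, while randomizing evaluations everywhere else.

```python
# Sketch of the masking step: f1 = fL + h1 * Z leaves values on the
# domain H unchanged (Z vanishes on H) while hiding fL elsewhere.
# Toy field F_97 with the order-8 multiplicative subgroup H = <64>.
import random

P = 97
g = 64                               # generator of the order-8 subgroup
H = [pow(g, i, P) for i in range(8)]

def peval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def Z(x):                            # vanishing polynomial of H: x^8 - 1
    return (pow(x, 8, P) - 1) % P

fL = [random.randrange(P) for _ in range(8)]   # witness polynomial
h1 = [random.randrange(P) for _ in range(3)]   # random masking polynomial

def f1(x):
    return (peval(fL, x) + peval(h1, x) * Z(x)) % P

# On H the mask disappears, so the constraint system sees the same witness:
assert all(f1(x) == peval(fL, x) for x in H)
```

Since FRI queries are answered at points outside H, the random mask h1 is what gives the prover zero-knowledge without affecting the constraints being checked.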

Results

  • After fully constructing RedShift and proving its security, the researchers provide a concrete example of how all of the system’s parameters can be instantiated and used for performance benchmarks.
  • For realistic benchmarking, the authors instantiate RedShift with q = r · 2^192 + 1, r = 576460752303423505 which is a Proth prime, and use ρ = 1/16.
  • Oracles were instantiated as Merkle trees using the Blake2s hashing algorithm. The setup phase was performed using the approach from Section VIII, where a random point was sampled using Fiat-Shamir heuristics by placing all individual root hashes of oracles to the setup polynomials into the transcript. For comparison, they also implemented the PLONK prover using Kate commitments and used two curves: BN254 and BLS12-381 that are expected to provide ∼80 bits and ∼120 bits of security levels respectively.
  • With these parameters, the authors benchmark Redshift proof sizes across different levels of target security, with a practical emphasis on 80-bit security.

  • Different settings are tested to evaluate RedShift’s proof generation times (in seconds) relative to two competing schemes, PLONK over BN254 and PLONK over BLS12-381:
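The oracles in these benchmarks are Merkle trees over Blake2s. A minimal sketch of such an oracle (commit to a vector of evaluations, then open one position with an authentication path) might look like the following; the tree layout is simplified and is not the paper's exact construction:

```python
# Minimal Merkle-tree oracle over Blake2s, as in the benchmarks above:
# commit to polynomial evaluations, then open one position with an
# authentication path. Toy sketch; leaf count must be a power of two.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.blake2s(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])          # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

evals = [str(v).encode() for v in [3, 1, 4, 1, 5, 9, 2, 6]]
root = merkle_root(evals)
path = merkle_path(evals, 5)
assert verify(root, evals[5], 5, path)       # honest opening verifies
assert not verify(root, b"7", 5, path)       # tampered value is rejected
```

Each query the verifier makes costs one logarithmic-size path, which is why proof size grows with the number of queries needed for the target security level.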

Discussion and Key Takeaways

  • This work provides a solution to one of the biggest blockers for implementing zero-knowledge smart contracts: the lack of generalizable zk proof systems with sufficient efficiency and recursive composition.
  • This is a major development since previous approaches to zkSNARKs, notably groth16, required application-specific trusted setups. Prior to this work, every new application that used zkSNARKs beyond simple asset transfers was required to perform a trusted setup ceremony in order to be safe.
  • Redshift provides a universal approach to zkSNARKs, as it enables any arbitrary program to be converted into provable zk circuits. As a result, developers can build syntax-rich programming languages and full-fledged applications atop Redshift without needing to perform a trusted parameter-generation ceremony.
  • Additionally, Redshift is plausibly post-quantum secure since it relies exclusively on collision-resistant hash functions, which are standardized and well-understood.

Implications and Follow-Ups

  • The set of trade-offs employed by Redshift carries a wide range of implications when it comes to existing approaches to smart contract privacy, execution, and scalability.
  • The lack of feature richness in existing approaches to zkRollups has pushed industry researchers to pursue alternative scalability solutions, such as Optimistic Rollups, which replicate instances of the Ethereum Virtual Machine (EVM) off-chain, with regular timestamps on-chain.
  • If proved successful, the advent of Redshift may lead to considerable design changes in the rollups field as the trade-offs between scalability, practicality and feature-richness continue to be explored.

Applicability

  • Redshift is the zk proof system used in zkSync, an emerging approach to the development and implementation of feature-rich zkRollups.
  • By using Redshift, zkSync can provide 10-minute economic finality for zkRollup transactions, which is considerably quicker than the current 1-to-2-week finality threshold in Optimistic Rollup schemes. The succinctness of the system enables gas costs per transaction to be capped at roughly 0.5k gas.
  • Zinc is the first general-purpose programming language built using Redshift. Both its syntax and semantics were borrowed from the Rust programming language. Through the reuse of proof circuits, Zinc smart contracts can be built to preserve the privacy of their users.

@Sean1992076 thank you so much for a really thorough and exciting contribution. I’m hoping you or someone else can help me put this into context a little bit. How does this fit into Ethereum’s plans for the future? I understand there are a couple of major updates on the horizon, where would something like this fit in? And what are the tradeoffs? @Jerry_Ho and @cipherix - I’d love to get your opinions on this thread.


I’m also interested in what @jmcgirk is asking here. In particular, I’m interested in the implication:

Redshift isn’t the only ZKP solution, so I’m curious about its competitive advantages. There is another paper (PlonK paper) where @Jerry_Ho and @cipherix are also starting to discuss a different ZKP solution. It seems like there should be some strong parallelism between these two threads.

Or maybe I’m misunderstanding and these two solutions don’t really have to “battle it out” so much as they could be simultaneously implemented into an Ethereum future?


> Or maybe I’m misunderstanding and these two solutions don’t really have to “battle it out” so much as they could be simultaneously implemented into an Ethereum future?

Yeah, they’re definitely complementary in some ways. The biggest advantage of Redshift-based applications, such as zkSync, is that you still get the benefits of Plonk, such as the ability to reuse the zk circuit for different applications, without needing a trusted setup. The zkSync team is pursuing broader applications of this technology by enabling users to develop their own Dapps using the Zinc programming language, whereas other zkRollup solutions are focusing predominantly on the P2P exchange use case.


Continuing the discussion from Research Summary: REDSHIFT: Transparent SNARKs from List Polynomial Commitment IOPs:

One of RedShift’s biggest contributions is that it doesn’t need a trusted setup to compute the prover and verifier parameters at the beginning. That setup is a security issue for zk-SNARKs, because a quantum computer may have a chance of recovering the secret parameters before the transaction completes.

The Ethereum community has suggested using BLS signature aggregation as a pragmatic medium-term solution to the signature verification bottleneck of sharding and Casper, later replacing it with STARK-based aggregation for quantum security. Pragmatic signature aggregation with BLS

RedShift is improved from PlonK, but it is quantum resistant and supports smart contracts for zk-SNARK transactions. PlonK is not quantum resistant because of its pre-processed parameters. I think RedShift is one of the future developments in quantum-resistant algorithms. It would be good to have more discussion comparing zk-STARK and zk-SNARK-based quantum-resistant algorithms.


Great observations here!

I’m particularly interested in the quantum resistance argument here. @Larry_Bates did a research summary back in March exploring quantum resistance of Bitcoin. That suggested that some of the concerns about quantum attacks need to be thoroughly examined using realistic attacker parameters (at least that was my big takeaway).

Is there a similar mindset that we should be adopting for Ethereum, Plonk, Redshift, etc?


I have a slightly broad question about engineering shops like the one that created REDSHIFT – how does the business model work? Would they license the technology or is it an open-source project that anyone can use?


It’s a really great question!
I think Post-Quantum Security of the Bitcoin Backbone and Quantum Multi-Solution Bernoulli Search mention that the k-BerSearch problem and Chain-of-PoWs are very different from zk-SNARKs and Redshift. Both of these quantum-resistance methods try to prevent the parameters from being deciphered during block computation/generation.


It’s the key question. RedShift supports smart contracts and quantum resistance, which means it may become one of the world’s safest and most private places to secure your funds. For example, a project could create an NFT platform without revealing owners’ transactions, and it’s open source.


Was trying to determine the best place to post this…

Combining some elements from:

  1. Research summary: Post-Quantum Security of the Bitcoin Backbone…

  2. Research Summary: PlonK…

  3. Research Summary: REDSHIFT…

@Larry: in #1 you mention that you don’t think that the crypto space has been good at using pre-existing frameworks with respect to possible quantum attacks and you specifically mention NIST. Don’t know if you’ve checked out the NIST Third PQC Standardization Conference from this June and/or the PQ Crypto conference from this July. But with these in mind, and Towards Post-Quantum Blockchain: A Review on Blockchain Cryptography Resistant to Quantum Computing Attacks, I see no mention of Redshift or STARKS. I can see why they may not be mentioned in NIST and PQCrypto, as they don’t explicitly address PQ as it relates to blockchain, but I’m not sure why I don’t see them alluded to in Towards PQ Blockchain. Is this the disconnect between pre-existing frameworks that you’re alluding to?

Please forgive any naivete, but it seems that Redshift and STARKs would both fall into the lattice-based category in this figure from Towards Post-Quantum Blockchain: A Review on Blockchain Cryptography Resistant to Quantum Computing Attacks:

Is the blockchain PQCrypto community solely concentrating on lattice-based solutions simply because they are the most promising and/or broadly what other blockchain PQCrypto potential solutions should I get up to speed on to join the conversation on a deeper level?

Thanks!


Luc, this is exactly the type of disconnect I was referring to. I must admit I was not up to speed on the NIST conferences from this year, but that just goes to my larger point that the speed of information makes it near impossible to keep up with everything. I believe you are spot on that the lattice-based category has a wide range of protocols that would qualify as having a lattice structure that are not currently included under that umbrella. On the other hand, this is where SCRF is attempting to close those gaps by identifying and classifying them so that we have a more comprehensive understanding of the landscape.


Disclaimer: NOT an expert in this field.

tl;dr:
Schemes whose security assumptions are based on a random oracle are considered plausibly quantum secure, because their behavior is neither periodic nor susceptible to Grover’s algorithm (quantum search) or Shor’s algorithm ((large) prime factorization / the discrete logarithm problem).
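As a rule of thumb, Grover's algorithm only gives a quadratic speedup for unstructured search, which is why hash-based schemes are treated as plausibly quantum secure: the effective security level roughly halves rather than collapsing. A quick back-of-the-envelope sketch (ignoring real-world quantum overheads):

```python
# Back-of-the-envelope security under Grover's quadratic speedup.
# Rule-of-thumb sketch only; real attacks carry large constant overheads.

def grover_security_bits(classical_bits: int) -> int:
    """Preimage search on an n-bit hash takes ~2^(n/2) quantum queries."""
    return classical_bits // 2

for n in (128, 256):
    print(f"{n}-bit hash: ~2^{grover_security_bits(n)} quantum queries")
# → 128-bit hash: ~2^64 quantum queries
# → 256-bit hash: ~2^128 quantum queries

assert grover_security_bits(256) == 128
```

Contrast this with Shor's algorithm, which solves factoring and discrete logs in polynomial time, breaking RSA and elliptic-curve schemes entirely rather than merely halving their security margin.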

Q1: I see no mention of Redshift or STARKS.

STARK is the whole proving system, which comprises many
-security assumptions
-cryptographic primitives (building blocks)

When we say quantum secure/quantum proof, we consider the following things:

1. What kind of cryptographic primitive/scheme are we talking about?
For example, consider a digital signature:

a. What kind of math tools/algorithms can be used to achieve the properties of a digital signature? Let us assume RSA.

b. So what does RSA’s security assumption rely on? We say you can multiply two large prime numbers easily, but you can’t factorize the product back easily (not in polynomial time).

2. So it’s the prime factorization problem.
a. Do we have a quantum algorithm for this kind of problem? Yes we do, called Shor’s.
b. So an RSA-based digital signature scheme is NOT quantum secure, THEORETICALLY.
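To make point 2 concrete, here is textbook RSA with the classic toy parameters (p = 61, q = 53): anyone who can factor n immediately recovers the private key, which is exactly what Shor's algorithm does efficiently at scale. Brute-force trial division stands in for Shor here, since the numbers are tiny.

```python
# Toy textbook RSA (p=61, q=53) showing why efficient factoring
# (Shor's algorithm) breaks RSA: knowing p and q gives phi(n),
# and phi(n) gives the private exponent d.
p, q = 61, 53
n = p * q                     # 3233, the public modulus
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # 3120, only computable if you can factor n
d = pow(e, -1, phi)           # private exponent (Python 3.8+ mod inverse)

m = 65                        # message
c = pow(m, e, n)              # encrypt with the public key
assert pow(c, d, n) == m      # an attacker who factored n can decrypt

# Trial division stands in for Shor's algorithm at this toy size:
recovered = next(f for f in range(2, n) if n % f == 0)
assert recovered in (p, q)
```

The same break applies to any scheme whose security reduces to factoring or discrete logarithms, which is why the discussion above focuses on hash-based and other non-periodic assumptions.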

3. But how insecure are we? Say, RSA 2048-bit.
a. “A resource estimation framework for quantum attacks against cryptographic functions - recent developments” estimates that we need at least 6190-8194 qubits to pose a practical threat. Go figure what kind of significant improvements we need to reach thousands of qubits with a trapped-ion quantum computer. (Hint: we only have something like 30-ish now; there are tons of engineering and physics problems waiting to be solved, according to my PhD friend who’s doing exactly this.)

Key Finding 1: Given the current state of quantum computing and recent rates of progress, it is highly unexpected that a quantum computer that can compromise RSA 2048 or comparable discrete logarithm-based public key cryptosystems will be built within the next decade. National Academies of Sciences, Engineering, and Medicine. Quantum computing: progress and prospects . National Academies Press, 2019.

4. So why do we need post-quantum cryptography?
You don’t want World War III, right?

Making your security parameter (in bits) longer is just temporary.
Finding something that is fundamentally incompatible with a quantum computer’s calculation paradigm is the goal.

So, the answer to Q1 is: because they’re on different layers. You’d mention the underlying assumptions and primitives instead of the whole system/protocol.

Back to your other questions…

Q2: Can STARK be called lattice-based

Nah, the security assumption is Random Oracle (RO) based.
nSTARK is defined in section 3.3 of the original paper. You could also say it’s based on a collision-resistant hash function (CRHF), I suppose.

Be advised that Dan Boneh has something to say (objections) about the above security assumption:
Random oracles in a quantum world

Another relevant article, How to Construct Quantum Random Functions

Or this, Classical vs Quantum Random Oracles.

In short: I don’t really know.
But since it’s hash-based, we’re with high probability quantum secure (not DLP, not prime factorization, not a quantum search problem). Dan Boneh’s quantum adversaries, quantum-accessible oracles, quantum queries… are another matter, I suppose.

Q3: Is the blockchain PQCrypto community solely concentrating on lattice-based solutions simply because they are the most promising

I… or WE, have no idea. For example, Academia Sinica in Taiwan has something like 3 or 5 groups that entered the last stage of NIST’s competition, like the multivariate schemes in your image.
(Warning: traditional Chinese.) Team Rainbow, interview of Bo-Yin Yang.
So until round 3 and the final round conclude… we have no idea whether it will be lattice-based or not.

As you already knew, the contest focused on two cryptographic primitives, signatures and encryptions.

It’s not about whether something is blockchain or not; the underlying security assumptions and cryptographic primitives share the same mathematical foundation.

==
Last paragraph, some random thoughts:
For quantum proof, we mean that the math process/formula/structure does NOT look like something quantum computing (which also follows a certain computation process) can take advantage of.

It’s not about which algorithm is more promising; it’s more that the current candidates have to satisfy all of the following criteria:

  1. practical, can be implemented (already implemented by the team)
  2. efficient, can run on… say, embedded systems, with reasonable performance (size, encryption, decryption, verification…)
  3. without patents on it
  4. unlikely to be found vulnerable to some quantum algorithm that a quantum computer can take advantage of.
    (According to Yang, being non-periodic seems like an important factor, but I have no idea what that really means.)

Please see if this talk can provide more info to you; it’s an overview of PQC given by Daniel J. Bernstein and Tanja Lange. (But since you’ve participated in the PQC workshop… little, I assume.)

References, in no particular order and not in formal format, sorry about that.

https://research.kudelskisecurity.com/2021/08/24/quantum-attack-resource-estimate-using-shors-algorithm-to-break-rsa-vs-dh-dsa-vs-ecc/

https://research.sinica.edu.tw/postquantum-cryptography-yang-bo-yin/

https://eprint.iacr.org/2018/046.pdf

https://eprint.iacr.org/2010/428.pdf

https://arxiv.org/pdf/quant-ph/9508027

https://arxiv.org/pdf/quant-ph/9605043

https://www.youtube.com/watch?v=HHhdV-gKvaM

https://link.springer.com/content/pdf/10.1007/978-3-642-25385-0_3.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.400.5108&rep=rep1&type=pdf

https://crypto.stackexchange.com/a/63229

https://eprint.iacr.org/2020/1270.pdf

This is an excellent breakdown @Jerry_Ho!

The security of these systems relies on the assumption that factoring large integers is computationally prohibitive. The scale of integer factorization required in an attack sits on a spectrum, whereby one could measure the extent to which a quantum computer can break these systems. Shor’s algorithm is often used to contextualize that susceptibility.

On Research Pulse #25 there was a really interesting paper that assesses the susceptibility of popular PK crypto systems to quantum attacks: “Assessment of Quantum Threat To Bitcoin and Derived Cryptocurrencies” https://eprint.iacr.org/2021/967.pdf


I’m familiar with Shor and Grover, my issue seems to have been with different layers, aka specific solutions to specific problems aka NIST being about signatures and encryptions vs. STARKs being a whole proving system, so thanks for clarification.

Where did you get the criteria for the current candidates? You have several references listed, but I wasn’t sure which one/s you were referencing.

Would love to see research summaries of some of the papers you reference, especially any that address PQ. Help me get more up to speed on this particular topic! I’m here to learn, share, and exchange ideas. Thanks in advance.


So basically this one (translated from Chinese):

“The second round of the contest looks at the algorithm’s performance. If a school of martial arts struggles to perform in special environments such as deep mountains or forests, it cannot become the leader of the martial arts world. Similarly, NIST requires the advancing teams’ algorithms to run smoothly on ordinary computers, on weaker microprocessors, and on hardware chips, without performing too badly. ‘What we’re trying to do at this stage is to find the parameters that make Rainbow run the fastest while still guaranteeing its security,’ explained Bo-Yin Yang.”

This one (forgot the exact part)

And this one overall.

Glad to have an interesting conversation and discussion with you, Luc, although I won’t call myself qualified in this field (PQC). : p
