# Approaching Constant: The Race for the Fastest Zero-Knowledge Proof System

Humans have an insatiable desire for speed. From supersonic jets to rockets that breach Earth's atmosphere, we constantly strive to go faster. This obsession with velocity extends to the realm of zero-knowledge proof (ZKP) systems, where we want our proofs to run as swiftly as the programs they verify or protect. This universal yearning has sparked a flurry of research and benchmarking efforts, each incrementally contributing to faster prover times. Real-time verifiable and privacy-preserving computation will be revolutionary for our interconnected world.

Plonky2, RISC Zero, and numerous hash-based systems have embraced small finite fields to accelerate prover speeds. The recent work Binius goes a step further, committing to extremely small values using a tower of binary fields, an approach that has ignited excitement within the research community. Elliptic curve-based systems, on the other hand, have focused on committing only to small elements, as demonstrated in JOLT/Lasso. This technique yields faster polynomial commitments by leveraging the speedier multi-scalar multiplications (MSMs) that arise when the scalar values are small. The primary prover work in folding schemes (Nova, Protostar) is a per-step commitment to the witness, which can be done with any homomorphic commitment scheme (CS).
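To see why small scalars matter, here is a toy sketch. It uses a multiplicative group mod a prime as a stand-in for an elliptic curve group (square-and-multiply is the multiplicative analogue of EC double-and-add); the group, prime, and operation counting are illustrative assumptions, not any real PCS implementation. The point is simply that the work grows with the bit length of the scalar, so committing to small values is cheaper:

```python
# Toy stand-in group: integers under multiplication mod p (NOT an elliptic
# curve, and not secure) -- used only to count group operations.
P = 2**61 - 1  # a Mersenne prime standing in for a curve's base field

def scalar_mul(g, k, p=P):
    """Square-and-multiply (the multiplicative analogue of EC double-and-add).
    Returns (result, group_ops) so we can count the work done."""
    acc, ops = 1, 0
    base = g
    while k:
        if k & 1:
            acc = acc * base % p  # "add" step
            ops += 1
        base = base * base % p    # "double" step
        ops += 1
        k >>= 1
    return acc, ops

# A 256-bit scalar vs. a 32-bit scalar: the small-value commitment
# performs roughly 8x fewer group operations.
_, big_ops = scalar_mul(5, (1 << 256) - 189)
_, small_ops = scalar_mul(5, (1 << 32) - 5)
print(big_ops, small_ops)
```

Real provers also exploit batching tricks (e.g. Pippenger's bucket method) across many scalars, but the per-scalar bit-length saving shown here is the core of the small-value speedup.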

**Universally, improvements in polynomial commitment schemes (PCS) result in faster provers.**

Why don't we all just use the same techniques for speed? Fundamentally, elliptic curve (EC) based systems cannot reap the benefits of smaller fields: EC constructions need large prime fields for security. In an EC-based PCS, each committed item is a scalar from that large field, making the operations expensive. Hash-based systems, on the other hand, are harder to use with recursion, a common technique for making ZK systems work on devices and networks with memory constraints. There is recent research that uses both types of system at once, but even there the space-efficiency benefits of folding schemes are unclear. Elsewhere, there is another often-overlooked contender: lattice-based systems, which can use smaller fields and support the homomorphism needed for recursion! LatticeFold, a paper released around the same time, is another new 'best of both worlds' approach. We will return to lattices further down.
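The homomorphism that recursion and folding rely on can be shown in a few lines. Below is a hypothetical toy Pedersen-style commitment in a multiplicative group (tiny, insecure parameters chosen purely for the demo; the generators `G` and `H` are assumed independent). Combining two commitments yields a commitment to the sum of the values, without opening either one, which is exactly what lets a folding scheme compress two instances into one:

```python
# Toy Pedersen-style commitment: Com(v; r) = G^v * H^r mod P.
# Parameters are demo-sized and NOT secure.
P = 1000003        # small prime modulus (toy)
G, H = 2, 3        # generators, assumed independent (toy)

def commit(value, blind, p=P):
    """Hiding, binding commitment to `value` with randomness `blind`."""
    return pow(G, value, p) * pow(H, blind, p) % p

c1 = commit(7, 11)
c2 = commit(5, 13)
# Additive homomorphism: combining commitments combines the committed
# values (and blinds) -- no opening needed.
assert c1 * c2 % P == commit(7 + 5, 11 + 13)
```

Hash-based commitments (Merkle trees over codewords) lack this algebraic structure, which is one reason recursion is more awkward there.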

Despite the advancements, zero-knowledge proving systems are still estimated to be a staggering 50,000 times slower than running the underlying program. As we continue our relentless pursuit of speed, where can we hope to find the next performance breakthrough?

### Parallelization

This is the low-hanging fruit for many zkVM teams, though it comes with intricate complexities of its own. Why not just break the problem apart and run it on many machines at once? Many call this method continuations or sharding. Depending on the workload, it can indeed yield order-of-magnitude improvements in final proof speed. While ideal for large proving tasks, it is still not a catch-all. For instance, privacy is not possible in this setting, as you hand the zk-witness to multiple remote provers.
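A minimal sharding sketch, with every name hypothetical: split the execution trace into shards, "prove" each shard independently (here a hash stands in for a real per-shard prover), and combine the results. A real system would also chain state between shards and aggregate the shard proofs cryptographically; note that every worker sees its slice of the witness, which is the privacy problem mentioned above:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def prove_shard(shard):
    # Stand-in for a real per-shard prover.
    return hashlib.sha256(bytes(shard)).hexdigest()

def prove_sharded(trace, shard_size=4):
    """Split `trace` (a list of small ints) into shards, prove them in
    parallel, and combine the shard proofs into one final digest."""
    shards = [trace[i:i + shard_size] for i in range(0, len(trace), shard_size)]
    with ThreadPoolExecutor() as pool:
        proofs = list(pool.map(prove_shard, shards))  # order is preserved
    # A real system would recursively aggregate; we just hash the transcript.
    return hashlib.sha256("".join(proofs).encode()).hexdigest()
```

Because `map` preserves shard order, the final digest is deterministic regardless of which worker finishes first.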

Not every problem is suitable for parallelization, but where it applies, it is great!

### Precompiles and Non-Uniform Incremental Verifiable Computation

Precompiles, a technique championed by many in the zero-knowledge proving industry, can be employed with both hash-based and elliptic curve-based zkVMs. Precompiles allow commonly recurring code to be optimized prior to compilation. Common hash functions that may appear many times during the execution of a zkVM, for instance, can be translated into 'precompiled' zkCircuit opcodes, enabling quicker execution within the zkVM. However, the precompile-centric approach has a drawback: someone must create a precompile for each potential bottleneck they may encounter. This is true whether the precompiles are circuits or direct proving components written in a programming language. Regardless of the zkVM, creating precompiles is not a trivial task, as it requires zero-knowledge-specific expertise. Furthermore, it introduces a new attack vector, as not every precompile can be audited.

At NovaNet, we advocate for an alternative approach called non-uniform incremental verifiable computation (NIVC) (SuperNova). We have open-sourced work for this and plan to continue contributing to the ecosystem. NIVC allows creators of zkVMs to leverage adaptively sized zkCircuits tailored to the prover's memory specifications. You can execute very large steps on powerful machines with lightning speed, or prioritize privacy by using smaller steps for reduced memory usage. NIVC enables the grouping of recurring opcodes into a single IVC set composed of multiple opcodes, serving as the NIVC equivalent of precompiles without the need to manually add specific precompile circuits. If the execution trace exhibits patterns, the zkVM can optimize those patterns to run quickly on the fly with existing audited zkCircuit opcodes, without human intervention.
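The non-uniformity can be sketched in miniature. In this hypothetical toy (all function names and the tiny "hash" are invented stand-ins, not SuperNova's actual circuits), the prover keeps a table of step circuits and a per-step selector picks which one runs, rather than forcing every step through one monolithic universal circuit:

```python
# Toy NIVC-style dispatch: a table of step "circuits" F_i and a program of
# selector indices. In SuperNova-style NIVC each step folds into a running
# instance for the chosen circuit; here we only model the dispatch.
def f_add(state):
    return state + 1

def f_double(state):
    return state * 2

def f_hash(state):
    return (state * 31 + 7) % 97  # stand-in for a hash opcode (toy)

CIRCUITS = [f_add, f_double, f_hash]

def nivc_run(state, program):
    """`program` is a list of circuit indices; only the selected circuit
    pays a proving cost at each step, not the whole instruction set."""
    for idx in program:
        state = CIRCUITS[idx](state)
    return state
```

The cost win over a uniform zkVM is that a step's proving work scales with the circuit actually selected, so a frequently recurring pattern can be dispatched to one larger, audited circuit on the fly.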

NIVC's versatility extends to incorporating the verification circuits of other proof systems within a larger system. For example, you could integrate the JOLT or SP1 verifier as a circuit within your NIVC system, enabling anyone worldwide to submit proofs to your NIVC layer using those provers. At NovaNet, we are doing this and more with even more specialized zkCircuits tailored for zkML and DePIN-specific use cases, where specialized provers are the only efficient way to tackle such complex problem sets.

NIVC is the ideal system for a decentralized prover network, as it allows for extreme speed while giving privacy- and memory-constrained provers optimal control. It serves as a foundation that incentivizes contributions from hardware-accelerated provers, researchers with potentially faster schemes, and even ordinary computers, all earning by creating optimal proofs.

**"Cool, so any proving scheme can contribute in an NIVC-based system. Any way to make the NIVC proving scheme itself faster?" - anon.**

NIVC systems are currently elliptic curve-based, owing to the homomorphic properties of EC-based polynomial commitment schemes (PCS). This means that even with prover work complexity of O(F), where F is the circuit size, prover speed is limited by the efficiency of multi-scalar multiplications (MSMs) involving large scalar values. In spite of this, current IVC-based systems still show competitive, if not superior, performance in many settings. Researchers are developing faster PCS that we expect will further enhance proving times in the NIVC setting. However, lattice-based PCS are becoming increasingly attractive. Until recently, most lattice-based PCS exhibited slower provers and larger proof sizes than their hash- or EC-based counterparts. Recent open-sourced code with benchmarks, however, demonstrates performance approaching that of Brakedown and FRI. With minor modifications to that code, we were able to achieve speeds close to those observed with EC-based Pedersen commitments. Lattice PCS are becoming fast!

This development is significant because lattice-based systems operate over 16-bit values rather than the large 256-bit scalar values used in EC systems. Standard hardware can efficiently accelerate computations at this word size. Lattice-based systems offer quantum resistance, a benefit often touted for hash-based systems. They feature a transparent setup, eliminating the need for a trusted setup ceremony. They are homomorphic and can be used for optimal recursive proving. Furthermore, they circumvent the issue of cycles of curves, which has been a major headache for IVC systems. We anticipate that research in this area will yield a competitive PCS with next-generation proving times. This will give superpowers to NovaNet, the upcoming global, incentivized, decentralized, NIVC-based prover network.
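Both properties, small word-size arithmetic and the homomorphism folding needs, are visible in an Ajtai-style commitment sketch, C = A·s mod q. Everything below is a hypothetical toy (dimensions and modulus are demo-sized, and real schemes need small-norm witnesses for binding), but it shows the arithmetic staying within 16-bit-friendly values while remaining additively homomorphic:

```python
# Toy Ajtai-style lattice commitment: C = A @ s mod q, with q fitting in
# 16 bits and witness entries kept small. Parameters are NOT secure.
import random

Q = 65521          # largest prime below 2^16 -- all arithmetic is 16-bit-sized
N, M = 4, 8        # toy matrix dimensions

random.seed(0)
A = [[random.randrange(Q) for _ in range(M)] for _ in range(N)]  # public matrix

def commit(s):
    """Commit to witness vector s (binding relies on s having small entries)."""
    assert len(s) == M
    return [sum(a * x for a, x in zip(row, s)) % Q for row in A]

def vec_add(u, v):
    return [(x + y) % Q for x, y in zip(u, v)]

s1 = [random.randrange(256) for _ in range(M)]   # small (8-bit) witness entries
s2 = [random.randrange(256) for _ in range(M)]
# Additive homomorphism: A(s1) + A(s2) = A(s1 + s2) (mod q) -- the property
# folding/recursion needs, with no large-field MSM in sight.
assert vec_add(commit(s1), commit(s2)) == commit(vec_add(s1, s2))
```

Compare this with the 256-bit scalar multiplications an EC-based PCS performs per committed element: here every multiply-accumulate fits comfortably in native machine words.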

Testnet launches this summer. Follow along: https://x.com/NovaNet_zkp