Parallel proving and next generation decentralized proving networks

In the near future, zero-knowledge proofs (ZKPs) will run on thousands of devices, verifying truth and ensuring privacy. Just as advanced cryptography functions seamlessly in protocols like HTTPS today, people will use ZKPs ubiquitously in their browsers, operating largely unnoticed in the background. The only hurdle to this verifiable and more private world? ZKPs are often considered slow, non-scalable, and a source of unnecessary computational complexity – or so the common narrative goes. In this article, I aim to persuade you that we are closer to that future than many might think.

With continuous advancements in the performance and portability of proving systems, we can design advanced networks today that break computations into pieces. This approach gives large, well-structured chunks to provers with special hardware (aggregation) and smaller, more manageable chunks to network participants with consumer-grade GPUs (privacy). The parallelization and decentralization of proving networks will usher in a wave of adoption, delivering on the full promise of ZKPs: scalability, verifiable computation, and privacy.

Before delving into the argument, let's pose a simple question: what is the state of the art for ZKP networks and proof markets? The answer may surprise you, as it did me. In stark (pun not intended) contrast to the public, permissionless, and decentralized networks they serve, many have centralized proving models that prioritize prover speed at the cost of decentralization. Centralized prover marketplaces are being constructed to meet an assumed increase in demand for ZKP applications (zkApps), without regard for the app consumers who will be producing that demand. Centralized provers often require passing data to servers in the cloud, compromising the privacy that many associate with ZKPs.

Prover networks are structured in a centralized manner because they are primarily built for rollups focused on scaling Ethereum, an admirable endeavor that delivers on one of the promised traits of ZKPs: scalability. However, this has 'proven' to be a double-edged sword. While projects focused on scaling Ethereum have achieved significant advances in production-grade ZKP technology, they often overlook privacy, despite the term "zero-knowledge" in their name.

Prover marketplaces typically follow the lead of prover networks, opting for centralized models. Many of their tech stacks and proving systems require that state updates be run by a single prover at a time; they were not built for parallel state updates. Network participants race to produce proofs, with only the winner receiving the reward. Alternatively, marketplaces may use proof-of-stake, where a prover is randomly selected with probability weighted by its stake. All these systems assume that a single prover is the most efficient way to create proofs.

What about privacy chains?

In contrast to this, Aztec, a privacy-centric rollup, focuses on truly decentralized and permissionless provers via a federated network. Check out their CEO, Zach, speaking about it here. Privacy chains have a stronger motivation to decentralize the provers in their network because any truly 'private' transaction needs to be run locally, exposing only the final proof to the network. Local proof generation is truly privacy-centric ZK.

Middle-layer provers can focus solely on the scalability problem, while user nodes can concentrate on running their computations privately. This type of 'client-side ZK' is often considered unfeasible. Proving things locally must be slow and result in a terrible user experience, right? Not necessarily, if you consider the kinds of things people will actually be proving locally, for example, simple DeFi transactions. Client-side ZK has already been widely used in systems like TornadoCash, where all prover work is done in the browser, 100% locally on users' machines. In zk games with fog-of-war mechanics, game items are held locally and proofs about those items are posted to the wider blockchain. Lastly, for small zero-knowledge machine learning (zkML) applications run locally, proving that a result is well formed is often quicker than running the whole inference inside a ZKP framework. There are very large problem sets where client-side proving makes perfect sense and works well enough, albeit with a slower or modified user experience.
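
To make the browser-proving point concrete, here is a minimal sketch of client-side proof generation using snarkjs, a common JavaScript proving library for Circom circuits (TornadoCash's original frontend used an earlier, similar toolchain). The circuit artifact names and input fields below are hypothetical placeholders, not the actual TornadoCash files; the point is simply that the private witness never leaves the user's machine.

```typescript
// Hedged sketch: in-browser proof generation with snarkjs.
// "withdraw.wasm" and "withdraw_final.zkey" are hypothetical artifact names.
import { groth16 } from "snarkjs";

async function proveLocally() {
  // Private inputs stay on the user's device; only the proof and the
  // public signals are ever submitted to the chain.
  const input = {
    secret: "12345",        // private witness (stays local)
    nullifierHash: "678",   // public signal
  };

  const { proof, publicSignals } = await groth16.fullProve(
    input,
    "withdraw.wasm",        // compiled Circom circuit (placeholder name)
    "withdraw_final.zkey"   // proving key (placeholder name)
  );

  // Submit only `proof` and `publicSignals` to the on-chain verifier.
  return { proof, publicSignals };
}
```
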

Why does everyone say we need special prover hardware then?

Once again, this comes back to the type of computation that needs to be proved. If you are aggregating thousands of transactions to be bundled into one proof posted on an L1, that workload may require special 'provers' to make it happen in a time-effective way. In many such systems, provers are designed to be very efficient, while sequencers batch periodically. End users often assume that ZK is the bottleneck in these systems, when in fact the delay is purposeful waiting for the sequencer to assemble a cost-effective batch. Special hardware can also be very useful for problem-set-specific applications like zkML, where more general proving systems lose out to specialization. The same is true for specialized software circuits; generally, a specialized circuit will outperform a more general-purpose zkVM. Specialization does have a role to play, but it should not be the assumed default going forward, particularly when proving schemes advance by roughly 10x multiple times per year.

What is the 'ideal' prover network architecture?

This statement might not win me any friends in the zk-hardware space, but the fact remains that, in an ideal world, our ZKP systems would not require special hardware at all. The proving systems would be efficient enough that all problems could be processed quickly on consumer hardware. While we're not quite there yet for many problem sets, I believe we'll eventually reach that point. This optimism stems from the ongoing development of proving systems that work increasingly well on consumer CPUs through the use of smaller fields and structures, with Binius being the most recent example of this trend.

In my view, the ideal ZKP infrastructure is a peer-to-peer proving system where privacy can be maintained and the best specialized provers can be fully utilized. Users should have the flexibility to decide between privacy and parallel proving. The network can then optimize for special proving opcodes and more general virtual machine (VM) opcode computation. Computational workhorses can be employed to aggregate and post on Layer 1 (L1). Instead of fostering prover competition, the network should incentivize maximal cooperation. An ideal prover system could draw inspiration from the collaborative peer-to-peer techniques already in production on IPFS.

As a reminder or introduction, IPFS, the InterPlanetary File System, is a decentralized protocol designed to create a peer-to-peer method of storing and sharing hypermedia in a distributed file system. Unlike traditional centralized server-based approaches, IPFS operates as a distributed network, where each user's device contributes to both hosting and retrieving content. This is very similar to our parallel, decentralized, peer-to-peer proving network for ZKPs. Each host can contribute its own compute, and more specifically its GPUs, to solve larger problems or to aggregate efficiently. The network can distribute work to special provers where needed (high throughput), but it can also adapt to meet network demand. Everyone in our network can be a prover, eliminating reliance on a single point of failure.
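
To illustrate the routing policy described above, here is a hypothetical sketch of how such a network might assign proving work. None of these types or functions refer to a real API (NovaNet's or otherwise); they only show the idea that privacy-sensitive chunks stay on the user's device, large aggregation chunks go to specialized provers, and everything else is picked up by ordinary peers.

```typescript
// Hypothetical sketch of work routing in a peer-to-peer proving network.

type TaskKind = "private" | "aggregation" | "general";

interface ProofTask {
  kind: TaskKind;
  sizeHint: number;          // rough circuit size or number of steps
  requiresLocality: boolean; // true if private inputs must never leave the user
}

interface Peer {
  id: string;
  hasAcceleratedHardware: boolean; // e.g. a data-center GPU or FPGA prover
  isLocalToUser: boolean;
}

function routeTask(task: ProofTask, peers: Peer[]): Peer | undefined {
  if (task.requiresLocality) {
    // Privacy-sensitive chunks are proved on the user's own device.
    return peers.find((p) => p.isLocalToUser);
  }
  if (task.kind === "aggregation" && task.sizeHint > 10_000) {
    // Large, well-structured aggregation work goes to specialized provers.
    return peers.find((p) => p.hasAcceleratedHardware);
  }
  // Everything else can be picked up by any consumer-grade peer.
  return peers.find((p) => !p.isLocalToUser);
}
```
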

How feasible is a peer-to-peer prover network right now?

The optimal scenario for centralized prover networks aligns with the optimal scenario for a peer-to-peer prover network. Nothing prevents a special, hardware-enhanced prover from joining a peer-to-peer network in some capacity, such as aggregation and specialized circuit execution. Moreover, by leveraging the same proving schemes as special opcodes, together with improved incentives through cooperative game-theoretic optimization, a peer-to-peer network becomes an attractive alternative to centralized and even federated ZKP prover schemes. In our general-purpose network, most functionalities can be transformed into verifier circuit-opcodes, enabling the remarkable speed of centralized provers and the exceptional portability and privacy of local systems like TornadoCash (Circom). These verifier circuits can be built from nearly any proving scheme whose verifier can be represented as a circuit in R1CS.
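
As a rough illustration of the "verifier as an opcode" idea, the sketch below shows how a general-purpose VM might keep a registry of verifier circuits alongside its ordinary instructions. The interfaces are entirely hypothetical and elide all cryptographic detail; they only convey that a proof produced elsewhere (by a specialized prover or a local browser prover) can be checked inside the network's own proofs.

```typescript
// Conceptual sketch: registering an existing scheme's verifier as a
// circuit opcode inside a general-purpose VM. All names are hypothetical.

interface R1CSCircuit {
  constraintCount: number;
  // Synthesize the verifier's constraints for a given set of public inputs.
  synthesize(publicInputs: bigint[]): void;
}

// Wraps the verification algorithm of some scheme (Groth16, a STARK, etc.)
// as an R1CS circuit the VM can invoke like any other opcode.
interface VerifierOpcode {
  name: string;        // e.g. "verify_groth16" (placeholder name)
  circuit: R1CSCircuit;
}

const opcodeRegistry = new Map<string, VerifierOpcode>();

function registerVerifierOpcode(op: VerifierOpcode): void {
  // General VM opcodes and specialized verifier opcodes live side by side.
  opcodeRegistry.set(op.name, op);
}
```
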

If there is any hope for the widespread adoption of ZKPs in this decade, we argue that our work is essential. Even if, in the short term, there are some problem sets where we choose to utilize specialized provers in dedicated hardware farms alongside local provers with GPUs, a peer-to-peer framework remains uniquely capable of delivering on all three promises of ZKPs: scalability, privacy, and verifiable computation.

IVC (incremental verifiable computation) via folding is an ideal base for such a system. Provers can drop off the network, and another prover can simply pick up the computation. Folding schemes like Nova are extremely memory efficient, allowing them to run on more types of machines. We can break problems down into sizes appropriate for the prover hardware tackling them. We can also allow for special verifier circuits as opcodes. This lets advanced hardware provers run where they are quickest and allows for privacy where it is most important. Parallel NIVC (non-uniform incremental verifiable computation) provers will accelerate proofs and make them more practical for everyday use.
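
The sketch below illustrates the handoff property that makes folding-based IVC attractive here: the running folded instance is a small object that any prover can pick up and continue from. The types and functions are hypothetical stand-ins, not Nova's actual API, and the fold operation is a placeholder rather than real cryptography.

```typescript
// Illustrative sketch of prover handoff in folding-based IVC (not the Nova API).

interface FoldedInstance {
  stepIndex: number;       // how many steps have been folded so far
  accumulator: Uint8Array; // commitment to the running instance/witness
}

function foldStep(acc: FoldedInstance, stepInput: Uint8Array): FoldedInstance {
  // Placeholder: a real folding scheme (e.g. Nova) combines the running
  // instance with the new step's instance via a random linear combination.
  return {
    stepIndex: acc.stepIndex + 1,
    accumulator: stepInput, // stand-in only; not real cryptography
  };
}

// A prover that drops off simply publishes its latest FoldedInstance;
// another prover resumes from it without redoing any earlier steps.
function resumeProving(
  lastPublished: FoldedInstance,
  remainingSteps: Uint8Array[]
): FoldedInstance {
  let acc = lastPublished;
  for (const step of remainingSteps) {
    acc = foldStep(acc, step);
  }
  return acc; // the final instance is later compressed into one succinct proof
}
```
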

At NovaNet, we are building a peer-to-peer network that encourages a cooperative, verifiable, and privacy-centric zero-knowledge world.
