Proof Composition Using Zero-Knowledge Virtual Machines: #RunawayZK

Zero-knowledge proof (ZKP) systems often require composition with other ZKP systems to achieve specific traits, such as privacy or improved on-chain verification (e.g., Groth16). Traditionally, this process involves cryptographic experts meticulously converting the verifier of one system into an arithmetic circuit for use in another. This labor-intensive task is typically proprietary, with most of the relevant code closed-source or obfuscated: a frustrating barrier to anyone looking to learn more about the process.

Enter the zero-knowledge virtual machine (zkVM), a concept promising to democratize ZKP development. zkVMs allow any developer to create ZKPs using code in familiar programming languages that can be compiled into WebAssembly (WASM) or RISC-V, eliminating the need for specialized knowledge. While discussions often revolve around the fastest zkVM or the one with the most cost-effective on-chain proofs, we never hear about zkVMs with specialized precompiles for general-purpose proof composition.

Yes, there are specialized 'precompiles' for common operations used in scaling Ethereum (keccak256) that come packed into most zkVM libraries! But I could find none that would allow an arbitrary proving scheme to easily be used as a verifier circuit in a larger proving network. To me this is a bit crazy: new research papers come out nearly every week that could be turned into Rust code, benchmarked, and composed much more quickly if such a composability system existed.
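To make this concrete, here is a minimal sketch, in plain Rust with no real zkVM crate, of what such a general-purpose 'verifier precompile' interface could look like. The names (VerifierPrecompile, PrecompileRegistry) are hypothetical and only illustrate the idea: an arbitrary scheme's verifier gets registered and dispatched the same way a keccak256 precompile is today.

```rust
/// Hypothetical interface a proving scheme would implement so its verifier can
/// be registered as a zkVM precompile / specialized opcode.
pub trait VerifierPrecompile {
    /// Scheme identifier, e.g. "groth16", "plonk", "my-new-scheme".
    fn scheme_id(&self) -> &'static str;

    /// Verify `proof` against `public_inputs`. This runs inside the zkVM guest,
    /// so the verification itself gets proven by the outer system.
    fn verify(&self, proof: &[u8], public_inputs: &[u8]) -> bool;
}

/// Hypothetical registry the zkVM could use to expose registered verifiers as opcodes.
pub struct PrecompileRegistry {
    verifiers: Vec<Box<dyn VerifierPrecompile>>,
}

impl PrecompileRegistry {
    pub fn new() -> Self {
        Self { verifiers: Vec::new() }
    }

    /// Register any scheme's verifier, no handcrafted circuit work required.
    pub fn register(&mut self, verifier: Box<dyn VerifierPrecompile>) {
        self.verifiers.push(verifier);
    }

    /// Dispatch to the verifier matching `scheme_id`, if one is registered.
    pub fn verify(&self, scheme_id: &str, proof: &[u8], inputs: &[u8]) -> Option<bool> {
        self.verifiers
            .iter()
            .find(|v| v.scheme_id() == scheme_id)
            .map(|v| v.verify(proof, inputs))
    }
}
```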

It's time to explore how zkVMs can revolutionize proof composition and open the door to specialized proving schemes, making these processes more accessible to both developers and cryptographic engineers. By leveraging zkVMs in conjunction with Non-Uniform Incremental Verifiable Computation (NIVC) based systems, we can create a framework that incentivizes researchers and developers to create the most efficient zk-components for a general-purpose proving network.

This approach has the potential to outperform general-purpose-ONLY zkVMs in various applications, including:

  1. Zero-knowledge Machine Learning (zkML), a very specialized field; the verifiers of these systems could become specialized opcodes in an NIVC zkVM.
  2. Zero-knowledge Elliptic Curve Digital Signature Algorithm (zkECDSA).
  3. ZKPs specialized in browser-based proving, or other ZKPs specialized for memory efficiency.
  4. Allowing any new system to 'get on-chain' quickly by leveraging the zkVM composition layer: a new ZKP scheme -> NovaNet zkVM -> (Groth16, AVS, etc.) -> on-chain (sketched after this list).
  5. And many more use cases.
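To illustrate the pipeline in point 4, here is a hedged sketch in plain Rust. The types and functions (NewSchemeProof, prove_verification_in_zkvm, wrap_in_groth16) are placeholder names, not a real NovaNet or Groth16 API; they only show how the stages chain: a new scheme's proof is verified inside the zkVM, and the zkVM's single, reusable Groth16 wrapper produces the on-chain artifact.

```rust
/// Proof emitted by some brand-new scheme a researcher just published.
struct NewSchemeProof { bytes: Vec<u8>, public_inputs: Vec<u8> }

/// Proof produced by the zkVM after running the new scheme's verifier as a guest/precompile.
struct ZkVmProof { bytes: Vec<u8> }

/// Final succinct proof, cheap to verify on-chain.
struct Groth16Proof { bytes: Vec<u8> }

/// Step 1: the zkVM proves "I ran the new scheme's verifier and it accepted".
fn prove_verification_in_zkvm(p: &NewSchemeProof) -> ZkVmProof {
    // Placeholder: a real zkVM would execute the verifier (compiled to WASM or
    // RISC-V) and produce a proof of that execution.
    ZkVmProof { bytes: [p.bytes.as_slice(), p.public_inputs.as_slice()].concat() }
}

/// Step 2: the zkVM's Groth16 wrapper compresses the zkVM proof for on-chain use.
fn wrap_in_groth16(p: &ZkVmProof) -> Groth16Proof {
    // Placeholder: built once for the zkVM, reused by every scheme that plugs in.
    Groth16Proof { bytes: p.bytes.clone() }
}

fn main() {
    let new_proof = NewSchemeProof { bytes: vec![0u8; 32], public_inputs: vec![1, 2, 3] };
    let zkvm_proof = prove_verification_in_zkvm(&new_proof);
    let onchain_proof = wrap_in_groth16(&zkvm_proof);
    println!("on-chain proof is {} bytes in this toy example", onchain_proof.bytes.len());
}
```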

I call this concept #RunawayZK, representing the exponential growth and innovation potential in the field of zero-knowledge proofs when barriers to entry are lowered and composition becomes more accessible and less of a proprietary trade secret.

By embracing an NIVC-based zkVM for proof composition, we can accelerate the development of more efficient, flexible, and powerful zero-knowledge systems. This democratization of ZKP technology will ultimately lead to faster, cheaper, and more specialized systems being adopted in more contexts, with their specializations becoming simple Γ  la carte opcodes in this zkVM.

Addressing the Challenges

It's well-known that zkVMs introduce significant overhead, which might initially make them seem ill-suited for proof composition. Undoubtedly, specialized circuits meticulously crafted by experts to compose one proof system into another will outperform a general-purpose zkVM, even if the latter has specialized precompiles for proof composition. However, this perspective overlooks a crucial factor: the human cost and accessibility barrier.

Many proof systems may never be composed in the first place if the barrier to composition remains extremely tedious and specialized. The technical superiority of handcrafted solutions becomes irrelevant if they're too complex or time-consuming to implement in practice.

The zkEngine Approach

If we make the zkVM itself (let's call it NovaNet's zkEngine) the target for this specialization, the work only needs to be done once. For instance, if the zkEngine is already designed to compose into Groth16 for more cost-effective on-chain proofs, any other proving system that converts its verifier into a circuit (or into WebAssembly) for use in the zkEngine will never need to duplicate that effort. This approach opens up possibilities for integrating various proving schemes within larger systems.
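Here is a hedged sketch, in plain Rust, of the "do the work once" idea: a scheme ports its verifier to Rust, exposes it through a guest entry point, and inherits the zkEngine's existing Groth16 wrapper with no further effort. The entry-point name and the verification logic below are placeholders for illustration, not NovaNet's actual host interface.

```rust
/// Toy stand-in for some scheme's verification equation. In practice this would
/// be the scheme's real verifier logic, ported to Rust and compiled to WASM.
fn my_scheme_verify(proof: &[u8], public_inputs: &[u8]) -> bool {
    // Placeholder check so the sketch compiles and runs; a real verifier would
    // re-derive commitments, check pairings / sum-checks, etc.
    !proof.is_empty() && !public_inputs.is_empty()
}

/// Hypothetical guest entry point the zkEngine would execute and prove.
/// Returning true means "the inner verifier accepted"; the zkEngine's existing
/// Groth16 wrapper then takes care of the on-chain step for every scheme alike.
pub fn zkengine_guest_main(proof: &[u8], public_inputs: &[u8]) -> bool {
    my_scheme_verify(proof, public_inputs)
}

fn main() {
    // Example: a dummy proof and inputs handed to the guest, which would be
    // compiled to WASM and run inside the zkEngine.
    let accepted = zkengine_guest_main(&[0xab; 64], &[1, 2, 3]);
    println!("inner verifier accepted: {accepted}");
}
```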

Moreover, this model could foster healthy competition among different schemes to earn a premium in both generalized and specialized applications. The ecosystem could evolve to reward the most efficient and innovative solutions.

Rethinking the Overhead Argument

In light of these considerations, the argument about zkVM overhead versus handcrafting becomes less relevant when weighed against the general usability and accessibility of new and specialized systems. The zkVM approach could accelerate the adoption and integration of novel proving schemes published in academic literature or developed by smaller teams who lack the resources for extensive handcrafting.

Benefits of the zkEngine Approach:

  1. Democratization: Lowering the barrier to entry for proof composition
  2. Flexibility: Easier integration of new proving schemes into existing systems
  3. Innovation Catalyst: Encouraging the development of specialized proving systems
  4. Efficiency Through Competition: Driving improvements in performance and usability
  5. Standardization: Promoting a common platform for proof composition

By embracing the zkEngine approach, we can create a more dynamic and innovative ecosystem for zero-knowledge proofs. While there may be a performance trade-off in some cases, the benefits in terms of accessibility, flexibility, and potential for rapid innovation far outweigh the costs.

zkVMs are not just about performance. They are about ease-of-use across the full ZK-stack.

NIVC Folding's Role in Aggregation

Let's discuss aggregation of proofs via NIVC folding. If the verifier circuit of a very fast, privacy-enabling, browser-based proving scheme is integrated into an NIVC zkVM network, all of its users become N repetitions of that circuit. Rather than posting proofs individually, they could let the network aggregate the proofs via folding to amortize the final L1 posting. The aggregator in this case could be specialized or not. Since the hard work of enabling aggregation is done at the zkVM level, everyone benefits.
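A hedged sketch of that flow in plain Rust: each user's proof becomes an instance of the shared verifier circuit, and the aggregator folds the instances into a single running accumulator before one L1 posting. The fold_step below is a stand-in (a simple XOR) for a real Nova/HyperNova-style folding step, and all names are hypothetical.

```rust
/// One user's proof, viewed as an instance of the scheme's verifier circuit.
struct VerifierInstance { commitment: [u8; 32] }

/// Running folded accumulator held by the aggregator.
#[derive(Default)]
struct Accumulator { digest: [u8; 32], folded: usize }

/// Placeholder fold: a real NIVC implementation would perform an algebraic
/// (random-linear-combination style) fold of the instance into the accumulator.
fn fold_step(acc: &mut Accumulator, inst: &VerifierInstance) {
    for (a, b) in acc.digest.iter_mut().zip(inst.commitment.iter()) {
        *a ^= *b; // stand-in for the algebraic fold
    }
    acc.folded += 1;
}

fn main() {
    // N users' proofs arrive at the aggregator.
    let instances: Vec<VerifierInstance> =
        (0u8..10).map(|i| VerifierInstance { commitment: [i; 32] }).collect();

    let mut acc = Accumulator::default();
    for inst in &instances {
        fold_step(&mut acc, inst);
    }

    // One final proof over `acc` is posted to L1 instead of N separate proofs.
    println!("folded {} instances into one accumulator", acc.folded);
}
```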

This is in contrast to other methods, such as AVSs that run the verification multiple times under certain crypto-economic security assumptions, or others that aggregate with SnarkPack or universal aggregation circuits. With this approach, the zkVM's aggregator code becomes the single target of improvements. Moreover, compared to the latter case, we can avoid a trusted setup since most NIVC schemes are transparent.

Privacy

Many new zkVM schemes are not privacy-enabled; they add the 'zk' back in after the fact. This can be done by composing with a privacy-preserving ZKP scheme like Groth16 or another zkSNARK. Alternatively, it could be done at the zkVM level by using privacy during folding. This is highlighted in the most recent version of the HyperNova paper, where the authors state they get 'privacy for free' with no need to use zkSNARKs (which are often more costly to get on-chain).

Wrap-up

Proof composition can become the specialty at the zkVM level rather than at the circuit level. We could achieve this with zkVM precompiles or generic specialized opcodes just for this task. Privacy can also be handled at the zkVM level if the folding operation itself is zk-enabled; this, of course, only applies to specific use cases where the folding is done locally.

This train of thought, and the properties of NIVC, allow for novel network and incentive structures that enable better specialization and generalizability across a wide range of ZKP use cases. #RunawayZK 😄
