If a 'ZK' Prover Network Asks for Your Data, Don't Give It to Them.
![](https://blog.icme.io/content/images/2025/02/novanetzkvm.png)
When many users hear the term zero-knowledge proofs (ZKPs), they assume privacy is built-in by default. After all, “zero-knowledge” sounds like it means “no one knows my data,” right?
Wrong.
In reality, many ZK prover networks don’t care about privacy at all — they care about speed. The dominant model in large-scale proving markets is brute-force efficiency: send your witness data in the clear to an untrusted prover network, where the fastest machine wins the race to compute your proof. These networks are structured around economic incentives, not privacy—whoever can generate proofs the quickest, cheapest, and at the highest volume gets paid.
That means most proving networks expect you to hand over your private data (trust me bro), or they simply do not support privacy use cases at all (always ask). Whether they’re leveraging cloud clusters, specialized hardware, or decentralized networks of provers, the goal is simple: maximize throughput, minimize costs, and scale up proving capacity—even if that means exposing sensitive information along the way.
For some applications, this trade-off makes sense. If privacy isn’t a concern, outsourcing proof generation to an untrusted network reduces costs, speeds up proving times, and eliminates local computation. This works well for use cases like:
✅ Scaling blockchains & rollups, where proofs are a fraud deterrent.
✅ Gaming proofs, if the inputs are already public.
✅ Certain DeFi applications, where the witness data isn’t sensitive.
But if privacy matters, blindly delegating proof work to these networks is a massive security risk. And yet, users may assume they’re safe simply because they’re using “zero-knowledge” tech.
The truth? Without privacy-preserving proving, you’re leaking your data to whoever is proving your ZKPs. Local verifiable compute that runs on any device is the true holy grail for ZKPs, not scaling blockchains.
The Need for Private Delegated Proving
For applications where the witness contains sensitive or personal data, sending it in the clear is a non-starter. Imagine:
- A user proving they meet creditworthiness criteria without exposing financial details.
- A healthcare provider proving compliance with regulations without revealing patient records.
- A private cryptocurrency transaction where a user needs to prove correctness without revealing amounts.
- Running a machine learning model without exposing biometrics, weights, or RAG data. 🤖
- Proving your location without giving your address...
In all of these cases, a proving model that requires exposing the witness to an untrusted entity breaks the very reason for using zero-knowledge proofs in the first place.
The best version is local or 'client-side' proving, where no sensitive data ever leaves your device. At NovaNet we specialize in memory-efficient ZKP tech that does just this: we use folding schemes. While these are fast enough for many applications, purely local proving can be unacceptably slow for others. So, what do we do when we need even more performance?
This is where private delegated proving becomes critical. Instead of outsourcing proof generation at the cost of privacy, private delegation allows a user to offload expensive proof generation while keeping their witness data hidden. The key idea is to design protocols where:
- The witness remains private even from the delegated prover.
- The prover still performs the heavy computation efficiently.
- The system can scale to support large numbers of users without significant overhead.
Private delegated proving is essential for scalable client-side ZKPs—where users, especially those on low-powered devices like smartphones or DePIN devices, need to generate zero-knowledge proofs efficiently without compromising privacy. In the following sections, we'll explore different approaches to delegated proving, including delegated SNARKs, CoSNARKs, zkSaaS, and oblivious issuance of proofs, and how they balance privacy, efficiency, and security.
Different Approaches to Delegated Proving—And Their Risks
Now that we’ve established why blindly trusting a prover network with your witness is dangerous, let’s break down the different ways delegated proving is done today. While each approach has trade-offs in terms of privacy, performance, and security, the key question remains: who sees your witness data?
1️⃣ The Standard Approach: Untrusted Delegated SNARKs
Fast, but your data is an open book.
The simplest and most common approach to scaling ZKPs is delegating proof work to an untrusted prover. In this model, the user sends their witness in plaintext to a high-performance proving system (like a cloud server, a rollup’s proving marketplace, or a decentralized network of provers). The prover runs the computation and sends back a valid proof. This is the model most commonly in use today.
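To make this concrete, here is a minimal sketch of what plaintext delegation looks like on the wire. All of the names (`ProofRequest`, `submit_to_prover`, the stubbed network call) are hypothetical, not any particular network's API; the point is simply that the witness travels in the clear.

```python
from dataclasses import dataclass, asdict


@dataclass
class ProofRequest:
    circuit_id: str
    public_inputs: list[int]
    witness: list[int]          # private data, sent in plaintext


def remote_prove(payload: dict) -> bytes:
    # Stand-in for the network round trip to a proving service.
    # Whoever operates that service sees payload["witness"] in full,
    # can log it, and can keep it forever.
    return b"proof-bytes"


def submit_to_prover(req: ProofRequest) -> bytes:
    # Nothing in this flow hides the witness from the prover.
    return remote_prove(asdict(req))


if __name__ == "__main__":
    req = ProofRequest("credit_check_v1", public_inputs=[2025], witness=[735, 52_000])
    print(submit_to_prover(req))
```

The proof you get back is perfectly valid; the damage is that the operator now has a copy of your data.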
🔴 The Risks
🚨 Your witness is exposed. If the prover is compromised or malicious, they can see, store, and potentially misuse your data.
🚨 Centralization risk. Many proving services rely on cloud giants like AWS or GCP—meaning a single entity can become a surveillance chokepoint.
🚨 No auditability. You’re trusting the prover to behave honestly, with no guarantee they’re discarding your witness after computing the proof.
This approach is acceptable only when privacy isn’t an issue—like optimistic rollups, gaming proofs, or public DeFi transactions. But for anything sensitive? Hard pass.
2️⃣ zk-SNARKs as a Service
A step up, but still has risks.
To address privacy concerns, some teams have proposed zk-SNARKs as a Service (zkSaaS)—a model where proof generation is distributed across multiple untrusted servers. Instead of sending your witness to one prover, it’s secret-shared across many. The Eos approach improves on this by optimizing proof delegation in a way that allows multiple untrusted parties to jointly compute a proof while maintaining efficiency and privacy guarantees under an honest-minority assumption.
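For intuition, here is a toy additive secret-sharing sketch (real zkSaaS-style systems use packed Shamir-style sharing and MPC-friendly proving, so treat this as the idea, not the protocol). Each server receives a share that on its own is a uniformly random field element; only someone holding every share can recover the witness. Threshold schemes lower that bar to a quorum, which is exactly where the collusion caveat below comes from.

```python
import secrets

# A large prime field, as used by many SNARKs (this is the BN254 scalar
# field modulus; any large prime works for the illustration).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617


def share(witness_value: int, n_servers: int) -> list[int]:
    """Split one field element into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    last = (witness_value - sum(shares)) % P
    return shares + [last]


def reconstruct(shares: list[int]) -> int:
    """Only the holder of *all* shares can recover the witness value."""
    return sum(shares) % P


if __name__ == "__main__":
    secret_income = 52_000                      # pretend witness value
    shares = share(secret_income, n_servers=4)

    # Each individual share looks like random noise...
    for i, s in enumerate(shares):
        print(f"server {i} sees: {s}")

    # ...but together they reconstruct the witness exactly.
    assert reconstruct(shares) == secret_income
```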
🟡 The Limitations
⚠️ Better than plaintext delegation. A single server won’t have full access to your witness.
⚠️ Still not fully private. If a majority of servers collude, they can reconstruct your witness.
⚠️ You’re trusting infrastructure providers. Many zkSaaS models rely on large cloud networks—a privacy risk if compromised.
zkSaaS works well when you need scalability but still introduces trust assumptions that may be unacceptable for truly private applications.
3️⃣ Oblivious Issuance of Proofs
Privacy-first, but requires interaction.
Oblivious issuance takes a different approach: instead of handing your witness to a prover, you interactively generate a proof with a verifier, who never learns the witness. This is commonly used in anonymous credentials, privacy-preserving authentication, and regulatory compliance.
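Production oblivious-issuance protocols are built from anonymous-credential machinery, but the core trick, the issuer signs something it never gets to see, is easy to show with a textbook blind RSA signature. This is an illustration only (toy parameters, and not the scheme these systems actually use):

```python
import math
import secrets

# Textbook toy RSA key (p=61, q=53). Never use parameters this small in practice.
N, E, D = 3233, 17, 2753


def blind(message: int) -> tuple[int, int]:
    """User: blind the message so the issuer can't see it."""
    while True:
        r = secrets.randbelow(N - 2) + 2
        if math.gcd(r, N) == 1:
            break
    blinded = (message * pow(r, E, N)) % N
    return blinded, r


def issue(blinded: int) -> int:
    """Issuer: signs the blinded value. It never learns `message`."""
    return pow(blinded, D, N)


def unblind(blind_sig: int, r: int) -> int:
    """User: strip the blinding factor to get a valid signature on `message`."""
    return (blind_sig * pow(r, -1, N)) % N


if __name__ == "__main__":
    message = 1234                      # e.g. a hash of the credential attributes
    blinded, r = blind(message)
    sig = unblind(issue(blinded), r)

    # Anyone can verify the signature, but the issuer cannot link it back
    # to the blinded value it signed -- the interaction is unlinkable.
    assert pow(sig, E, N) == message % N
```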
🟢 The Benefits
✅ Your witness stays private. The proof is issued in a way that the issuer never learns the underlying data.
✅ Proofs are unlinkable. No one can trace which interaction generated which proof.
⚠️ The Limitations
⚠️ Requires interaction. You need a verifier to help issue the proof before it becomes non-interactive.
⚠️ Limited to specific use cases. It’s great for identity, attestations, and privacy-preserving signatures, but not always general-purpose.
If you need auditable, privacy-preserving credentials, this is a strong choice. But what if you want fast, scalable delegated proving without giving up privacy?
4️⃣ CoSNARKs – Private Multi-Prover Delegation
The best of both worlds: privacy + performance?
Collaborative SNARKs (CoSNARKs) combine the privacy of oblivious issuance with the scalability of delegation—without exposing the witness.
Instead of sending your witness to one prover (like in standard delegation) or many potentially colluding provers (like in zkSaaS), CoSNARKs split proof generation across multiple mutually distrusting parties. Each prover only sees an encrypted, secret-shared fragment of the computation.
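Here is a tiny sketch of the intuition, assuming the same additive sharing as in the zkSaaS example: linear steps of the computation can be carried out share-by-share, so the provers can jointly check a constraint without anyone reconstructing the witness. Real CoSNARK protocols use full MPC to handle multiplications and the rest of the proving algorithm; this only shows the flavor.

```python
import secrets

P = 21888242871839275222246405745257275088548364400416034343698204186575808495617


def share(value: int, n: int) -> list[int]:
    """Additively secret-share a field element among n provers."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % P]


# Toy linear constraint from some circuit: 3*x + 5*y - z == 0 (mod P).
def constraint_share(x_i: int, y_i: int, z_i: int) -> int:
    """Each prover evaluates the constraint on its own shares, locally."""
    return (3 * x_i + 5 * y_i - z_i) % P


if __name__ == "__main__":
    # Private witness, chosen to satisfy the constraint: z = 3x + 5y.
    x, y = 7, 11
    z = 3 * x + 5 * y

    n_provers = 3
    xs, ys, zs = share(x, n_provers), share(y, n_provers), share(z, n_provers)

    # Every prover computes a share of the constraint value without seeing
    # x, y, or z, and only the *sum* of those shares is ever opened.
    opened = sum(constraint_share(xs[i], ys[i], zs[i]) for i in range(n_provers)) % P
    assert opened == 0  # constraint holds; witness never reconstructed
```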
🟢 Why CoSNARKs Are Good
✅ Your witness remains fully private—even from the provers.
✅ Malicious provers can’t extract information on their own. As long as enough provers stay honest (see the caveat below), the shares they see reveal nothing about the witness.
✅ Scalable & efficient. CoSNARKs reduce prover overhead while keeping privacy guarantees intact.
⚠️ The Limitations
⚠️ Higher communication costs. Since proving is split across multiple parties, they must exchange data, which can slow down the process in high-latency networks.
⚠️ Requires multiple provers to be online. If some provers go offline, proof generation can stall or require recomputation.
⚠️ Not always optimal for single-prover setups. If only one party is proving, the extra overhead of CoSNARKs may not be worth it.
⚠️ Security depends on the majority being honest. While a minority of provers can be malicious, if a majority collude, privacy can be compromised. 'Trust me bro' becomes 'Trust us bro'.
5️⃣ Delegated Spartan – Lightweight, Fast, and Private
Outsourcing proof work without exposing the witness.
An alternative to CoSNARKs is a delegated proving model built on holographic SNARKs such as Spartan or Marlin, which cleverly offloads the heaviest part of proving to an untrusted party while keeping the witness private. At NovaNet we have been experimenting with this for a while. Many prominent teams and researchers are also trying it out, notably Remco at Worldcoin.
In Spartan-based delegation, the proving process is split into three steps:
- Reducing R1CS/CCS (or Plonkish) to polynomial claims (this is light for the prover).
- Proving the witness polynomial evaluation (also light for the prover).
- Proving sparse polynomial evaluations (this is the heaviest part, taking 90%+ of the work).
The key insight? Step 3 can be offloaded to an untrusted party without them ever seeing the witness polynomial. This provides an order of magnitude speedup while ensuring privacy remains intact.
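A sketch of why step 3 can be outsourced safely: the expensive object there is the multilinear extension (MLE) of the sparse constraint matrices, evaluated at challenge points drawn from the public transcript, and those matrices encode the circuit, not the witness. The snippet below (illustrative field and naming, not Spartan's or NovaNet's actual code) shows that kind of evaluation; in the real protocol the helper also produces the accompanying evaluation proof, but either way only public data ever leaves your device.

```python
import secrets

P = 21888242871839275222246405745257275088548364400416034343698204186575808495617


def eq(bits: list[int], point: list[int]) -> int:
    """Multilinear eq(b, r) = prod_k (b_k*r_k + (1-b_k)*(1-r_k)) mod P."""
    acc = 1
    for b, r in zip(bits, point):
        acc = acc * ((b * r + (1 - b) * (1 - r)) % P) % P
    return acc


def index_bits(i: int, n_vars: int) -> list[int]:
    """Little-endian bit decomposition of a row/column index."""
    return [(i >> k) & 1 for k in range(n_vars)]


def sparse_matrix_mle(entries: dict[tuple[int, int], int],
                      rx: list[int], ry: list[int]) -> int:
    """Evaluate the MLE of a sparse matrix at the challenge point (rx, ry).

    `entries` maps (row, col) -> value and encodes the *circuit*, which is
    public; nothing here depends on the witness, so an untrusted helper can
    do this dominant piece of Spartan-style proving on our behalf.
    """
    total = 0
    n_row_vars, n_col_vars = len(rx), len(ry)
    for (i, j), v in entries.items():
        term = v * eq(index_bits(i, n_row_vars), rx) % P
        term = term * eq(index_bits(j, n_col_vars), ry) % P
        total = (total + term) % P
    return total


if __name__ == "__main__":
    # A toy 4x4 constraint matrix with a few non-zero entries (public data).
    A = {(0, 1): 3, (2, 3): 5, (3, 0): 7}
    # Challenge points come from the public transcript, so they can be
    # shared with the helper too.
    rx = [secrets.randbelow(P) for _ in range(2)]
    ry = [secrets.randbelow(P) for _ in range(2)]
    print("helper returns:", sparse_matrix_mle(A, rx, ry))
```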
🟢 The Benefits
✅ 10×+ faster than standard proving.
✅ Privacy-preserving—no witness leakage.
✅ Can batch multiple client proofs for efficiency.
✅ No trust assumptions on the untrusted helper.
⚠️ The Limitations
⚠️ Depends on the availability of untrusted helpers.
⚠️ If the helper refuses to compute, the proof stalls.
⚠️ Witness polynomial evaluation scales with the full computation size, meaning larger circuits incur higher on-device costs; there is an upper bound on what you can prove efficiently on-device.
This approach is ideal when you want the efficiency of outsourcing proof work but can’t afford to leak your witness. I also argue this is the best approach for an incentivized 'prover network'.
Ease of Use vs. Zero Knowledge: Are We Being Misled?
There’s a massive marketing push in the industry right now around zkVMs and prover networks being “easy to use.” The pitch is simple:
💡 “Deploy your smart contracts with a zkVM!”
⚡ “Scale your rollup with our ultra-fast prover network!”
🚀 “Use zero-knowledge tech without changing your code!”
It sounds great. Plug-and-play zk tech, abstracting away the complexities of proof generation. But there’s something they’re not telling you.
Many of these prover networks and zkVMs have no plan to support privacy use cases. They are built around special CPU instructions, monolithic hyper-scaled provers with ~128GB of memory, and in many cases the underlying ZKP schemes they use do not even support 'zero-knowledge' yet.
They’re built for scalability, not security—for cheaper rollup verification, not protecting user data. And that’s fine if you’re transparent about it. But instead, teams are marketing their solutions as if they’re advancing 'zero-knowledge' cryptography while conveniently ignoring the fact that they leak every bit of witness data to untrusted provers.
So if you want to know whether a ZKP prover network supports privacy, ask about local verifiable compute, CoSNARKs, or the other delegated-proving approaches mentioned above.
The Bigger Picture: It's Not Just About Privacy — It's About Verifiable Compute That Scales.
Zero-knowledge proofs are often framed as a tool for privacy, but a real advantage is general purpose, scalable, verifiable compute. The ability to generate proofs that anyone can verify, without relying on trusted infrastructure, has implications far beyond hiding sensitive data. It enables trustless computation, decentralized AI verification, and seamless interoperability between protocols.
For this vision to scale, proving systems must move beyond high-cost architectures and focus on memory-efficient proof generation that can run anywhere. The ideal model is one where any user can generate proofs locally on their own devices, reducing reliance on centralized infrastructure while making proving accessible to low-power hardware. This isn’t just about privacy—it’s about efficiency, security, and unlocking new applications for zero-knowledge proofs.
Prover networks still have a role to play, but not in their current form. Instead of just computing proofs on behalf of rollups, they should focus on aggregation and proof composition—compressing many locally generated proofs into a single, efficiently verifiable proof for posting to L1. Moreover, they can help out with delegated proving. The model shifts from outsourcing computation to optimizing verification, ensuring scalability without compromising security; it has a decentralizing effect rather than a centralizing one.
The future of ZK isn’t about blindly making proving faster or easier at the cost of privacy and decentralization. It’s about designing systems where proving is efficient enough to be done locally, with prover networks acting as an incentive layer rather than execution engines.
We 'all' are making ZK a ubiquitous technology.
👋