Succinct Verification: The Key to AI

It may seem presumptuous for someone steeped in cryptographic proof systems to claim insight into the future of AI. After all, if there’s a better path forward, why not just build it and let the results speak for themselves?

Fair. But some problems are structural — and it’s precisely at the structural level that Zero-Knowledge Proofs (ZKPs) offer something transformative.

Here’s the core insight:

💡
An AI system can only truly create and maintain knowledge to the extent that it can verify that knowledge, on its own, succinctly and reliably.

This is my modified version of the Verification Principle, originally discussed by Rich Sutton in 2001. It is not just a philosophical stance; it is a systems constraint. If a model cannot prove to itself (or to others) that a fact, a belief, or a behavior is valid under its own operating assumptions, then the burden of validation shifts externally: humans must intervene, inspect, and correct. And as the system grows, so does this burden, rapidly exceeding what anyone can meaningfully maintain. Even if we produce hard-coded automated verifiers, their results must themselves be succinctly verifiable; otherwise the whole system grows unmanageably complex.

We’ve been here before. Early expert systems were programmed by hand: if-then rules stacked atop one another to model knowledge. But as the rule count grew, so did the complexity, and with it, brittleness. Interactions became unpredictable, behavior erratic. Systems broke. The workaround? Brute-force search, as in Deep Blue. Why did it succeed? Because at the point of decision, the system could verify internally, through massive search, that a move was good. It didn’t rely on heuristics hand-coded by humans. It constructed evidence for its beliefs.

But that verification was shallow. Deep Blue could verify moves, but not the scoring function that guided its long-term planning. That was still a fixed, opaque artifact—human-tuned and unverifiable. TD-Gammon took a step further: it learned and improved its own evaluation function. That mattered. Self-verifying subsystems are resilient. They can scale.

Today’s AI systems — LLMs, RL agents, planners—are increasingly powerful. But at the level of knowledge, they remain fragile. Large language models contain gigabytes of implicit knowledge, yet they cannot verify even the simplest of their outputs. Ask them “do birds have wings?” and you’ll get an answer, maybe even a chain-of-thought justification — but no proof, no audit trail, no cryptographic guarantee that what they "know" is anything more than plausible pattern matching.

ZKPs change that. With modern zero-knowledge tooling, we can start to attach succinct, cryptographic proofs to machine-learned beliefs:

  • That an inference followed from a model with known weights.
  • That a decision was produced under specific constraints and rules.
  • That an update was honestly computed on a real dataset.

This turns “knowledge” into something portable, verifiable, and durable.
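To make the first bullet concrete, here is a minimal Python sketch of the idea of binding an inference to known weights. Everything in it is illustrative: the "model" is a toy dot-product classifier, and a plain SHA-256 hash stands in for a real commitment scheme. Crucially, this sketch has neither zero-knowledge nor succinctness — the verifier re-runs the model and sees the weights, which is exactly what a real zkML proof lets you avoid.

```python
import hashlib
import json

def commit(weights):
    # Toy binding commitment to the model weights (not hiding, not a real ZK
    # commitment): hash a canonical serialization.
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def infer(weights, x):
    # Toy "model": dot product plus bias, then a threshold.
    score = sum(w * xi for w, xi in zip(weights["w"], x)) + weights["b"]
    return 1 if score > 0 else 0

def verify_inference(commitment, weights, x, claimed_y):
    # Naive verification by re-execution: check the weights match the
    # commitment AND reproduce the claimed output. A succinct zkML proof
    # convinces the verifier of both facts WITHOUT revealing the weights
    # or re-running the model.
    return commit(weights) == commitment and infer(weights, x) == claimed_y

weights = {"w": [0.5, -1.0, 2.0], "b": -0.1}
c = commit(weights)
y = infer(weights, [1.0, 1.0, 1.0])
assert verify_inference(c, weights, [1.0, 1.0, 1.0], y)
```

The commitment is what makes the claim portable: anyone holding `c` can later check that a published answer really came from those exact weights, and tampered weights fail the check.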

Without this capability of internal, self-contained, succinct verification, we cannot scale AI to domains that demand trust. We will continue to build systems that need to be babysat, patched, and debugged by human operators. And every attempt to scale beyond what a human can oversee will eventually hit the wall of unprovable complexity.

As we say in cryptography:

“Don’t trust. Verify.”

In AI, we might say:

“Don’t just learn. Prove that you’ve learned.”

Until AI can do that—autonomously, efficiently, and convincingly—it won’t matter how big the models get. They’ll still be brittle. They’ll still require people to monitor their knowledge. And we’ll still be programming systems that are, in the words of the old joke, bigger than our heads.

GKR-based zero-knowledge machine learning (zkML) currently represents the state of the art in both speed and support for complex ML operations. But here's an ironic prediction: the fastest zkVMs—designed initially for general-purpose computation—will ultimately unlock breakthroughs in zkML as well.

Each order-of-magnitude improvement in proving speed, as we evolve from general-purpose instruction sets like RISC-V or WASM toward highly optimized zkVM designs, will not just benefit applications like zk-rollups and validity proofs. With the right modifications (such as ONNX support) and many of the same techniques (sum-checks and lookups), these same zkVMs will become the cutting edge in zkML, offering a unified, verifiable execution layer for AI.
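The sum-check protocol mentioned above is the workhorse behind GKR and many fast zkVMs. Below is a toy Python sketch of it for a multilinear polynomial g over n boolean variables, with the prover and verifier folded into a single loop for readability. The prime modulus and the example polynomial are my own illustrative choices; a real deployment would use a non-interactive transcript (Fiat–Shamir) rather than local randomness.

```python
from itertools import product
import random

P = 2**61 - 1  # prime field modulus (toy choice)

def sumcheck(g, n):
    """Run the sum-check protocol on a multilinear g over {0,1}^n.

    Returns the verified sum. Because g is multilinear, each round
    polynomial s_i(X) has degree 1, so sending s_i(0) and s_i(1)
    fully determines it.
    """
    # Prover's initial claim: the sum of g over the boolean hypercube.
    claim = sum(g(*bits) for bits in product([0, 1], repeat=n)) % P
    total = claim
    fixed = []  # verifier challenges chosen so far
    for i in range(n):
        rest = n - i - 1
        # Prover: round polynomial s_i(X), summing g over the
        # remaining boolean variables with earlier ones fixed.
        def s(x):
            return sum(g(*fixed, x, *b)
                       for b in product([0, 1], repeat=rest)) % P
        s0, s1 = s(0), s(1)
        # Verifier: consistency check against the running claim.
        assert (s0 + s1) % P == claim
        # Verifier: random challenge; fold the claim to s_i(r).
        r = random.randrange(P)
        claim = (s0 + r * (s1 - s0)) % P
        fixed.append(r)
    # Final check: one evaluation of g at the random point.
    assert g(*fixed) % P == claim
    return total

# Example: g(x, y, z) = 2xy + 3z, multilinear in each variable.
g = lambda x, y, z: (2 * x * y + 3 * z) % P
```

For this example, `sumcheck(g, 3)` returns 16 (the term 2xy contributes 2 on two of the eight hypercube points; 3z contributes 3 on four). The point is the cost asymmetry: the verifier does O(n) field work plus a single evaluation of g, while the prover does the exponential-size sum — which is exactly what makes the verification succinct.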
