Trustless Agents — with zkML

Ethproofs' quest to verify Ethereum blocks in real time can be leveraged in a similar race toward near real-time verifiable compute for trustless agents. Here's how.

Coordinating Trustless Agents

ERC-8004 represents something genuinely novel in the blockchain space: a practical approach to the agent coordination problem. Rather than proposing yet another consensus mechanism or token economic model, it tackles the fundamental question of how autonomous agents can discover, evaluate, and interact with each other across organizational boundaries.

The specification is elegantly minimal - three registries that provide the basic coordination primitives: identity resolution, reputation aggregation, and execution validation. But buried in that simplicity is a sophisticated understanding of distributed systems design and the tradeoffs inherent in trustless coordination.

The Three Core Registries

Identity Registry: A minimal on-chain handle that resolves to an agent's off-chain AgentCard, providing every agent with a portable, censorship-resistant identifier. This follows RFC 8615 principles, with Agent Cards available at standardized well-known URIs, and supports CAIP-10 account identifiers for cross-chain compatibility.
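As a rough illustration, here is how a client might resolve an agent's on-chain handle to its AgentCard. The registry ABI, the `getAgent` method, and the well-known path are assumptions for illustration rather than the exact ERC-8004 interface.

```typescript
// Sketch: resolving an on-chain agent handle to its off-chain AgentCard.
// The ABI, the `getAgent` method, and the well-known path are hypothetical.
import { ethers } from "ethers";

const IDENTITY_REGISTRY_ABI = [
  // hypothetical read method: agentId -> (owner, agentDomain)
  "function getAgent(uint256 agentId) view returns (address owner, string agentDomain)",
];

async function resolveAgentCard(registryAddress: string, agentId: bigint) {
  const provider = new ethers.JsonRpcProvider("https://eth.example.rpc");
  const registry = new ethers.Contract(registryAddress, IDENTITY_REGISTRY_ABI, provider);

  // 1. Resolve the on-chain handle to the agent's domain.
  const [owner, agentDomain] = await registry.getAgent(agentId);

  // 2. Fetch the AgentCard from an RFC 8615 well-known URI (path assumed).
  const res = await fetch(`https://${agentDomain}/.well-known/agent-card.json`);
  const card = await res.json();

  // 3. A client would then check that the card points back at the same
  //    on-chain identity, e.g. via a CAIP-10 identifier like "eip155:1:0x...".
  return { owner, agentDomain, card };
}
```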

Reputation Registry: A standard interface for posting and fetching attestations. The clever design pushes reputation calculation off-chain while keeping attestations on-chain, enabling sophisticated reputation algorithms while maintaining verifiability of the underlying data. This creates space for specialized services - agent scoring systems, auditor networks, and insurance pools.
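A minimal sketch of that division of labor, assuming a hypothetical `postAttestation` method: only the attestation and a hash of the feedback document go on-chain, while the full document and any scoring logic live off-chain.

```typescript
// Sketch: post an attestation on-chain, keep the feedback document off-chain.
// The method name and arguments are assumptions, not the ERC-8004 ABI.
import { ethers } from "ethers";

const REPUTATION_ABI = [
  "function postAttestation(uint256 subjectAgentId, bytes32 dataHash)",
];

async function attest(
  signer: ethers.Wallet,        // the attesting client's key
  repRegistryAddress: string,
  subjectAgentId: bigint,
  feedback: object,             // full feedback document, stored off-chain
) {
  const registry = new ethers.Contract(repRegistryAddress, REPUTATION_ABI, signer);
  // Only a hash of the feedback lands on-chain; scoring services fetch the
  // document off-chain and can verify it against this commitment.
  const dataHash = ethers.keccak256(ethers.toUtf8Bytes(JSON.stringify(feedback)));
  const tx = await registry.postAttestation(subjectAgentId, dataHash);
  await tx.wait();
}
```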

Validation Registry: Generic hooks for requesting and recording independent checks through economic staking or cryptographic proofs. The registry defines only the interface, allowing any validation protocol to integrate seamlessly. This is where an interesting innovation lies - supporting multiple validation approaches from simple re-execution to advanced zero-knowledge proofs.
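A sketch of what such generic hooks could look like; the method names below are assumptions, since ERC-8004 fixes the pattern (a request, then a recorded response) rather than this exact ABI.

```typescript
// Sketch: the request/response shape a Validation Registry hook might take.
// Hypothetical method names; only the two-step pattern is the point.
const VALIDATION_ABI = [
  "function requestValidation(uint256 agentId, bytes32 dataHash) returns (uint256 requestId)",
  "function submitValidationResponse(uint256 requestId, uint8 result, bytes proofOrEvidence)",
];
// A staking validator would re-execute the job and submit evidence; a
// zk validator would submit a proof that a verifier contract checks.
```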

What Are The Trust Models?

What makes ERC-8004 particularly interesting from a systems perspective is its pluggable approach to validation. The specification explicitly supports three distinct trust models, each with different security assumptions and performance characteristics:

Reputation-based Validation

This model aggregates historical performance data to assess agent reliability. It's computationally efficient and provides good UX, but suffers from well-known problems: gaming attacks, cold-start problems for new agents, and context collapse where past performance doesn't predict future behavior in different domains.

The interesting design choice here is pushing reputation calculation off-chain while keeping attestations on-chain. This allows for sophisticated reputation algorithms while maintaining verifiability of the underlying data.
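For example, an off-chain scoring service might apply an exponentially decayed average over the attestations it has indexed from the chain. This is one illustrative policy, not something the specification prescribes.

```typescript
// Sketch: one possible off-chain scoring rule over on-chain attestations.
interface Attestation {
  score: number;      // e.g. 0..1, parsed from the off-chain feedback document
  timestamp: number;  // unix seconds, taken from the on-chain event
}

// Exponentially decayed average: recent attestations count more.
function reputationScore(attestations: Attestation[], halfLifeDays = 30): number {
  const now = Date.now() / 1000;
  const lambda = Math.LN2 / (halfLifeDays * 86400);
  let weighted = 0;
  let totalWeight = 0;
  for (const a of attestations) {
    const w = Math.exp(-lambda * (now - a.timestamp));
    weighted += w * a.score;
    totalWeight += w;
  }
  return totalWeight > 0 ? weighted / totalWeight : 0;
}
```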

Crypto-economic Validation

Here, validators stake economic value and re-execute computations to verify correctness. This provides strong security guarantees through mechanism design - validators have financial incentives to validate honestly and face economic penalties for incorrect validation.

The challenge is scalability. Re-execution doesn't scale well with computation complexity, and the economic security depends on having sufficient validator participation and stake at risk. For complex AI inference tasks, this becomes prohibitively expensive.
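The core economic condition is simple to state: a validator's slashable stake has to exceed the value of corrupting the result, otherwise bribery is cheaper than honesty. A toy check, with illustrative fields:

```typescript
// Sketch: the basic economic-security check behind staked re-execution.
// Field names and the slashing model are illustrative assumptions.
interface ValidatorQuote {
  stake: number;             // value the validator has at risk
  slashableFraction: number; // share of stake slashed on provable misbehavior
  reexecutionFee: number;    // what the validator charges to re-run the job
}

// Accept a validator only if what they can lose exceeds what corrupting
// the result is worth.
function isEconomicallySecure(q: ValidatorQuote, valueAtRisk: number): boolean {
  return q.stake * q.slashableFraction > valueAtRisk;
}
```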

Crypto-verifiable Validation

This is where the technical landscape gets interesting. The specification supports cryptographic proofs of execution - either through trusted execution environments (TEEs) or zero-knowledge proofs.

TEEs provide hardware-based attestation but introduce trust assumptions about hardware manufacturers and potential side-channel attacks. Zero-knowledge proofs provide mathematical guarantees but have historically been too slow and expensive for practical AI workloads.

This is where our zkML-specific zkVM, JOLT-Atlas, comes in.

The zkML Performance Problem

Zero-knowledge machine learning has been theoretically sound but practically limited by performance constraints. Traditional circuit-based approaches require representing every operation in the ML model as algebraic constraints over finite fields. For complex operations like neural network activations (ReLU, softmax, etc.), this becomes computationally expensive.

The problem compounds with model size. Modern language models have billions of parameters and require complex operations that translate poorly to arithmetic circuits. Existing zkML solutions either support only simple models or have proof generation times measured in hours rather than seconds.

JOLT's Architectural Innovation

JOLT represents a fundamental shift in zero-knowledge virtual machine design. Instead of encoding operations as arithmetic constraints, it primarily uses structured lookup tables combined with the sum-check protocol.

The key insight is that CPU instructions can be verified by checking their results against pre-computed tables of valid instruction outcomes. These tables are too large to materialize explicitly (often 2^128 entries), but they're highly structured, allowing efficient cryptographic commitment schemes.
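One way to see why such enormous tables remain workable: many instructions decompose into small per-chunk subtables that can be materialized and recombined, which is the route the original JOLT/Lasso design took (a later section notes that newer techniques relax this). The sketch below illustrates the decomposition for a 64-bit bitwise AND; it is an analogy for the table structure, not JOLT's actual code.

```typescript
// Sketch: a 2^128-entry table is never materialized. For bitwise AND over
// 64-bit operands, the operands are split into 8-bit chunks, each chunk pair
// is checked against a small 2^16-entry table, and the results are recombined.
const CHUNK_BITS = 8n;
const CHUNK_MASK = (1n << CHUNK_BITS) - 1n;

// Small, materializable table for one 8-bit chunk pair (2^16 entries).
function chunkAndLookup(aChunk: bigint, bChunk: bigint): bigint {
  return aChunk & bChunk; // stands in for a committed lookup table
}

function and64ViaChunks(a: bigint, b: bigint): bigint {
  let result = 0n;
  for (let i = 0n; i < 8n; i++) {
    const aChunk = (a >> (i * CHUNK_BITS)) & CHUNK_MASK;
    const bChunk = (b >> (i * CHUNK_BITS)) & CHUNK_MASK;
    result |= chunkAndLookup(aChunk, bChunk) << (i * CHUNK_BITS);
  }
  return result;
}
```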

The technical breakthrough comes from applying novel lookup arguments to zero-knowledge virtual machine architecture; the name itself stands for Just One Lookup Table.

JOLT-Atlas: Optimizing for ML Workloads

JOLT-Atlas takes the core JOLT architecture and optimizes it specifically for machine learning inference. The performance gains come from a few technical innovations:

Lookup-Optimized ML Operations

Traditional circuit-based zkML struggles with non-linear functions common in neural networks. JOLT-Atlas handles these operations as primitive lookups, eliminating the need for complex constraint representations.
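For instance, an 8-bit quantized ReLU is just a 256-entry table, so proving it reduces to a table membership check rather than a comparison-and-select circuit. The following sketch is illustrative; JOLT-Atlas's actual tables and quantization scheme may differ.

```typescript
// Sketch: a non-linearity as a lookup instead of constraints.
// 8-bit quantized ReLU is a 256-entry table.
const RELU_I8: Int8Array = new Int8Array(256);
for (let i = 0; i < 256; i++) {
  const x = i < 128 ? i : i - 256; // reinterpret the index as a signed int8
  RELU_I8[i] = x > 0 ? x : 0;
}

// In a lookup-based prover, this "apply" step is what the lookup argument
// certifies, entry by entry over the inference trace.
function reluLookup(x: number): number {
  return RELU_I8[x & 0xff];
}
```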

Sparsity Exploitation

ML models exhibit natural sparsity - many parameters are zero or near-zero, and many operations can be optimized away. JOLT-Atlas exploits this sparsity at the instruction level, reducing the actual computation that needs to be proven. Surprisingly, sparsity is also a trait exploited in the recent Twist and Shout paper.
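A small sketch of why this matters for proving cost: in a sparse weight row, only the non-zero entries generate multiply-adds, so the work that ends up in the trace scales with the non-zeros rather than the full layer width. This is an illustrative representation, not JOLT-Atlas internals.

```typescript
// Sketch: proved work scales with non-zero weights, not layer width.
interface SparseRow {
  indices: number[]; // positions of non-zero weights
  values: number[];  // the non-zero weights themselves
}

function sparseDot(row: SparseRow, activation: number[]): number {
  let acc = 0;
  for (let k = 0; k < row.indices.length; k++) {
    acc += row.values[k] * activation[row.indices[k]];
  }
  return acc; // only these multiply-adds need to appear in the proven trace
}
```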

Precompile Integration

With recent advances like Twist and Shout, JOLT no longer requires decomposing operations into smaller subtables. This enables ML-specific precompiles - primitive operations optimized for common ML patterns like matrix multiplication, convolution, and activation functions.
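Conceptually, a precompile treats an entire operation like the matrix multiply below as one provable unit with its own specialized argument, instead of expanding it into thousands of scalar instructions. The dispatch table here is purely illustrative; JOLT's precompile mechanism is internal to the prover.

```typescript
// Sketch: the unit of work an ML precompile would prove in one shot.
type Tensor = number[][];

function matmul(a: Tensor, b: Tensor): Tensor {
  const out: Tensor = a.map(() => new Array(b[0].length).fill(0));
  for (let i = 0; i < a.length; i++)
    for (let k = 0; k < b.length; k++)
      for (let j = 0; j < b[0].length; j++)
        out[i][j] += a[i][k] * b[k][j];
  return out;
}

// A prover could route whole operations to specialized proving routines
// rather than expanding them into individual lookups (illustrative only).
const ML_PRECOMPILES = { matmul /*, conv2d, softmax, ... */ };
```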

Privacy & Memory

Performance alone isn't the full story. JOLT-Atlas can also provide true zero-knowledge properties via folding, preserving the privacy of inputs, outputs, and intermediate computations. In the Ethproofs context, JOLT will need to be either streamed or folded to enable parallel 'continuations' and to keep prover memory under control. Folding therefore serves both privacy and prover memory: a classic two-birds-one-stone effort.
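To make the continuations point concrete, here is a minimal sketch of chunked proving with a running folded accumulator. The types and callbacks are hypothetical; the point is that peak prover memory is bounded by the chunk size rather than the trace length, and chunks could also be proven in parallel before folding.

```typescript
// Sketch: "continuations" as chunked proving plus folding (illustrative types).
interface ChunkProof { startStep: number; endStep: number; proof: Uint8Array; }

async function proveInChunks(
  traceLength: number,
  chunkSize: number,
  proveChunk: (start: number, end: number) => Promise<ChunkProof>,
  fold: (acc: ChunkProof | null, next: ChunkProof) => Promise<ChunkProof>,
): Promise<ChunkProof> {
  let acc: ChunkProof | null = null;
  for (let start = 0; start < traceLength; start += chunkSize) {
    const end = Math.min(start + chunkSize, traceLength);
    const chunk = await proveChunk(start, end); // peak memory bounded by chunkSize
    acc = await fold(acc, chunk);               // running accumulator stays small
  }
  return acc!; // assumes traceLength > 0
}
```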

Technical Integration with ERC-8004

The integration between JOLT-Atlas and ERC-8004's Validation Registry is straightforward from an architectural perspective:

  1. Agent executes ML inference
  2. JOLT-Atlas generates ZK proof of execution
  3. DataHash commits to proof and verification parameters
  4. Validator contract verifies proof on-chain
  5. ValidationResponse records verification result
  6. Reputation system updates based on cryptographic validation

The key advantage is that validation becomes purely mathematical rather than relying on economic incentives or hardware assumptions. The proof either verifies or it doesn't - there's no ambiguity, no committee decisions, no economic attacks to consider.
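From the agent's side, the flow above might look roughly like the following. The contract methods reuse the assumed Validation Registry ABI sketched earlier, and the encoding details are placeholders; only the prove-commit-verify shape is the point.

```typescript
// Sketch of the agent-side flow; names and encodings are assumptions
// layered on the ERC-8004 idea, not a reference implementation.
import { ethers } from "ethers";

async function submitVerifiedInference(
  validationRegistry: ethers.Contract, // assumed requestValidation(...) ABI as above
  agentId: bigint,
  proofBytes: Uint8Array,              // produced off-chain by the zkVM prover
  publicInputs: Uint8Array,            // model/input/output commitments
) {
  // Commit to the proof and its verification parameters (the DataHash).
  const dataHash = ethers.keccak256(ethers.concat([proofBytes, publicInputs]));

  // Open a validation request tied to that commitment.
  const tx = await validationRegistry.requestValidation(agentId, dataHash);
  const receipt = await tx.wait();

  // A verifier contract (or the registry itself) then checks the proof and
  // records the ValidationResponse; reputation services read that record.
  return receipt;
}
```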

Implications for Agent Architecture

This capability enables a new class of verifiable AI agents with interesting properties:

Computational Integrity: Agents can prove they executed specific ML models with specific inputs, enabling accountability for AI decision-making.

Privacy Preservation: Zero-knowledge properties mean agents can prove correct execution without revealing sensitive inputs, model weights, or intermediate computations.

Hardware Independence: Unlike TEE-based approaches, JOLT-Atlas proofs can be generated on standard computing infrastructure without trusted hardware requirements.

Auditability: Because JOLT proves execution of standard RISC-V instructions, the entire execution trace can be audited at the assembly level if needed.

Learning from Ethereum's Real-Time Proving Race

The path to practical zkML verification isn't theoretical — we can see exactly how it will unfold by looking at what's happening with Ethereum block proving. Ethproofs is a block proof explorer for Ethereum that aggregates data from various zkVM teams to provide a comprehensive overview of proven blocks, including key metrics such as cost, latency, and proving time.

The results have been dramatic. SP1 Hypercube can prove over 93% of Ethereum blocks in under 12 seconds, with an average latency of 10.3 seconds. Given the current slot time of 12 seconds, "real-time" means zkVMs proving at least 99% of mainnet blocks in 10 seconds or less.

This "real-time proving" race for Ethereum blocks demonstrates exactly what becomes possible when zkVM performance crosses critical thresholds. The aim is to establish a public good that evolves into the standard for Ethereum block execution proof for zkVMs, ultimately expanding to encompass all Ethereum blocks while maintaining reasonable costs and latency.

The same performance trajectory that made real-time Ethereum proving possible is now happening for AI inference through JOLT-Atlas and other zkML efforts.

The Ethproofs Model for zkML

Just as Ethproofs enables users to compare proofs by block, download them, and explore various proof metadata (size, clock cycle, type) to better understand individual zkVMs and their proof generation process, we can expect similar infrastructure for zkML proofs.

Imagine an "MLproofs" equivalent where:

  • AI agents publish proof metadata for model inferences
  • Users can verify computation integrity across different models
  • Performance metrics (proving time, proof size, verification cost) are transparently comparable
  • Trust scores are built on mathematical verification rather than reputation alone

Real-time Ethereum proving marks a technical leap for the zero-knowledge space, built on engineering advances across cryptography, hardware acceleration, and distributed systems. Many of those same advances will directly benefit zkML applications.

The Broader Infrastructure Play

What we're really building here is the verification layer for decentralized AI infrastructure. ERC-8004 provides the coordination primitives - how agents find each other, establish identity, and build reputation. JOLT-Atlas provides the verification primitives - how agents prove they did what they claimed to do.

Together, they enable AI agents that can interact trustlessly while maintaining verifiable behavior and private computation. This is particularly important as AI capabilities continue to advance and autonomous agents take on more consequential tasks.

The economic implications are significant. Instead of AI capabilities being concentrated in a few large platforms, we get a competitive marketplace where agents compete on verifiable performance metrics rather than platform lock-in or proprietary advantages.

Technical Challenges and Future Work

Several technical challenges remain:

Scalability: While JOLT-Atlas is dramatically faster than existing zkML solutions, proving large language model inference still requires significant computation. GPU acceleration and further algorithmic improvements are needed for larger models.

Model Coverage: Current zkML solutions support common ML operations but may not cover every possible model architecture. Expanding the set of supported operations while maintaining performance is ongoing work.

Integration Complexity: While the high-level integration is straightforward, practical deployment requires careful consideration of proof generation timing, verification costs, and failure handling.

Zero-Knowledge Implementation: True zero-knowledge properties require additional cryptographic machinery beyond modifying JOLT. The folding schemes needed for privacy are still being implemented and optimized.

From Ethereum Blocks to AI Inference: A Similar Path

ERC-8004 solves the coordination problem for autonomous agents. JOLT-Atlas solves the verification problem for AI inference. Together, they provide the infrastructure needed for practical trustless AI agents.

This approach is a qualitative shift from "trust the platform" to "verify the computation." The performance characteristics make it viable for real applications, and the privacy properties make it suitable for sensitive use cases.

The question now is execution: building the tooling, libraries, and applications that turn this infrastructure into practical systems. The technical foundation is solid, and we can see the roadmap clearly by following Ethereum's proving milestone. The rest is engineering.

The path from "proving Ethereum blocks is impossible" to "proving 93% of blocks in under 12 seconds" took less than two years of focused engineering. The same trajectory is happening for AI inference verification. What seemed like science fiction - proving complex ML model execution in real-time while preserving privacy - is rapidly becoming a practical reality.

Real-time proving is the space race of zero knowledge, and JOLT-Atlas is applying the same breakthrough engineering to make verifiable AI agents practical.

Written by: Wyatt Benno

I work where AI meets cryptography. Check out our cryptographically powered AI memory system (https://www.kinic.io/). Learn more about our upcoming NIVC based prover network (https://www.novanet.xyz/).
