Verifiable AI Memory — When AI Remembers, Who Controls the Truth?

The future arrived, and it tastes like corporate surveillance.
AI can now remember a lot about you. Not just your conversation from five minutes ago, but you — your preferences, your quirks, the embarrassing thing you asked about cryptocurrency in March. AI memory is not only a blossoming set of features, but also a psychology hack hidden in the TOS fine print. Privacy advocates call it creeping dystopia. I call it almost inevitable.
It's crazy! We're building this entire memory-enabled AI revolution on a foundation of "trust us, bro." Your AI chatbot remembers everything, stores it somewhere in the cloud, and pinky promises that your data is safe. Meanwhile, you get zero proof of what's actually happening to those intimate conversations you're having at 2 AM.

I'm a technical founder, and I spend my time finding better ways. There must be a better way here.
One direction? Zero knowledge machine learning (zkML). Instead of trusting Big Tech with your digital memory, what if you could prove things about your AI interactions without revealing the interactions themselves? What if your local AI could generate cryptographic proof that yes, it recognized your face, or yes, it analyzed your financial data and says you're creditworthy — all without anyone else ever seeing the raw biometrics or bank statements?
The use cases are intoxicating: Proof of inference that eliminates the biometric proof of personhood dance. Dataset monetization where you prove your model works without giving away your secret sauce. Agentic workflows where one AI's proof automatically triggers the next — "This person qualifies for a loan, here's the ZK proof, send the USDC."
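To make the agentic handoff concrete, here's a toy Python sketch of "proof in, action out." An HMAC tag stands in for a real ZK proof, and `attest`, `loan_agent`, and `send_usdc` are all hypothetical stubs; the point is only the shape of the flow, where the next agent acts on a verified claim without ever seeing the raw data.

```python
import hmac
import hashlib

# Illustration only: an HMAC tag stands in for a real ZK proof.
SHARED_KEY = b"demo-verifier-key"

def attest(claim: str) -> dict:
    """The 'prover' emits a claim plus a tag vouching for it."""
    tag = hmac.new(SHARED_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "proof": tag}

def verify(attestation: dict) -> bool:
    expected = hmac.new(SHARED_KEY, attestation["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["proof"])

def send_usdc(recipient: str, amount: int) -> str:
    # Hypothetical downstream action triggered by a valid proof.
    return f"sent {amount} USDC to {recipient}"

def loan_agent(attestation: dict) -> str:
    # The next agent acts only if the proof checks out; no raw data needed.
    if not verify(attestation):
        return "rejected: invalid proof"
    return send_usdc("borrower", 1000)

result = loan_agent(attest("credit_score >= 700"))
```

Swap the HMAC check for a Groth16 verifier and the claim for a circuit output, and the control flow stays exactly the same.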
The problem? Current zkML proofs are computational monsters — sometimes 10,000x more expensive than just running the damn model normally. It's like using a monster truck to deliver a pizza. The computational carnage!!
But there's a more economical path forward than general purpose zkML, one that makes sense today: portable and verifiable AI memory. Think of it as a verifiable 'Plaid for AI' — a way to carry your digital identity and interaction history between AI systems without exposing the messy details.
Building It
So what does it actually take to build this thing?
Start with the memory system itself. The smart money is on vector databases for semantic search — basically, finding "things that smell like your search term"—married to graph-based memory that tracks how concepts connect. Think of it as your AI's version of that wall of red strings conspiracy theorists or investigators use in the movies, except it actually works.
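A minimal sketch of that combination, with hand-made three-dimensional vectors standing in for real embeddings and a tiny adjacency map playing the role of the graph layer:

```python
import math

# Toy memory store: embedding per concept, plus graph edges between concepts.
memory = {
    "coffee_order": [0.9, 0.1, 0.0],
    "pizza_rules":  [0.1, 0.9, 0.0],
    "march_crypto": [0.0, 0.2, 0.9],
}
links = {"coffee_order": ["pizza_rules"], "march_crypto": []}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def recall(query_vec, k=1):
    """Semantic search: rank memories by similarity, then follow graph edges."""
    ranked = sorted(memory, key=lambda key: cosine(memory[key], query_vec), reverse=True)
    hits = ranked[:k]
    # Pull in directly connected concepts: the 'red string' step.
    related = [n for h in hits for n in links.get(h, [])]
    return hits + related

recall([0.8, 0.2, 0.0])  # -> ['coffee_order', 'pizza_rules']
```

Real systems replace the brute-force scan with an approximate nearest-neighbor index, but the recall-then-traverse shape is the same.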
The beauty is you can plug this into RAG (Retrieval-Augmented Generation) and most LLMs will eat it up without complaint—no special formatting, no architectural gymnastics. But here's the catch: if you want to run this through a zero knowledge virtual machine (zkVM), you need everything compiled to WASM or RISC-V. Doable, but not trivial.
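As a rough illustration of how retrieved memory slots into a prompt with no special formatting, here's a toy RAG step; the notes and the keyword-overlap scoring are stand-ins for a real vector search:

```python
# Minimal RAG-style sketch: retrieve stored notes that overlap the query,
# then prepend them to the prompt as plain text.
notes = [
    "User hates any form of fish on pizza.",
    "User asked about cryptocurrency in March.",
    "User takes an oat-milk flat white.",
]

def retrieve(query: str, k: int = 2):
    words = set(query.lower().split())
    def score(note: str) -> int:
        # Strip trailing punctuation so "pizza." matches "pizza".
        return len(words & {w.strip(".,") for w in note.lower().split()})
    return sorted(notes, key=score, reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {n}" for n in retrieve(query))
    return f"Relevant memories:\n{context}\n\nUser: {query}"

print(build_prompt("what pizza should I order"))
```

The LLM just sees more context in the prompt, which is why no architectural gymnastics are required.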
The bigger headache? ML-specific non-linearities that crop up elsewhere in the stack. We're punting on those for now — call it an "announcement of a future announcement." Follow the NovaNet blog to stay up to date on our cutting-edge ZKP work.
Most vector databases are write-once affairs, which is useless for real memory. We built ours with updatable indexing because humans are messy creatures who want to add, edit, and delete their digital thoughts at will.
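A dict-backed toy version of that updatable interface (a production index like HNSW needs far more care, but the add/edit/delete surface is the point):

```python
# Sketch of an updatable memory index: entries can be added, overwritten,
# and deleted, unlike append-only vector indexes.
class MemoryIndex:
    def __init__(self):
        self._store = {}

    def upsert(self, key: str, vector: list):
        self._store[key] = vector  # insert, or overwrite in place (edit)

    def delete(self, key: str):
        self._store.pop(key, None)  # user-initiated forgetting

    def keys(self):
        return sorted(self._store)

idx = MemoryIndex()
idx.upsert("coffee", [0.9, 0.1])
idx.upsert("coffee", [0.8, 0.2])   # edit: replace the old embedding
idx.upsert("crypto_march", [0.1, 0.9])
idx.delete("crypto_march")          # delete: the embarrassing question is gone
```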
Storage That Doesn't Suck
Now for another controversial bit: blockchain storage.
Before you roll your eyes, hear me out. The "blockchain storage costs a kidney" narrative is years out of date. ZK proofs and rollups have changed the game entirely. You can build a data storage system with succinct verifiability — meaning you can prove things about your data to any L1 blockchain that accepts Groth16 proofs; this is almost all of them! I hear Bitcoin can now too with some caveats 😬.
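To show the shape of succinct verifiability, here's a toy Merkle commitment in Python. A real deployment would wrap the membership check in a Groth16 circuit so any L1 can verify it cheaply; plain SHA-256 stands in here, and the chunk contents are made up:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to all memory chunks with a single 32-byte root."""
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate last node on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def prove(leaves, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    layer, path, i = [h(leaf) for leaf in leaves], [], index
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        sib = i + 1 if i % 2 == 0 else i - 1
        path.append((layer[sib], i % 2 == 0))
        layer = [h(layer[j] + layer[j + 1]) for j in range(0, len(layer), 2)]
        i //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sibling, leaf_on_left in path:
        node = h(node + sibling) if leaf_on_left else h(sibling + node)
    return node == root

chunks = [b"coffee order", b"pizza rules", b"march crypto question"]
root = merkle_root(chunks)
```

The verifier only ever touches the root and a logarithmic-size path, never the data itself; that's the "prove things about your data" property in miniature.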
And if that's too fancy for you? There are WASM and RISC-V based blockchains that'll store your entire vector database for around $10/month; ICP is my favorite, since Jens Groth and other great researchers work or have worked there, so I know the research is legit! Other data availability layers have also multiplied like rabbits. The infrastructure is there, and it's been around for a while.
The Privacy Problem
Last piece: keeping your data actually private in storage.
The obvious move is encryption with private keys. The problem is that users are catastrophically bad at handling private keys. They lose them, forget them, screenshot them, and generally treat them like grocery lists.
Our proof-of-concept Kinic Plugin sidesteps this entirely with WebAuthn — your device biometrics or hardware tokens become your keys. Your face is your password, your fingerprint is your vault. No seed phrases to lose in a couch cushion. V1 is a bit clunky, but with V2 you'll even be able to use Google Login to set this up on your devices.
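As a sketch of the key-handling idea: assume the authenticator hands back a 32-byte secret (WebAuthn's PRF extension can do this), and stretch it into a vault key with HKDF (RFC 5869). The salt, info label, and placeholder secret below are all invented for illustration; no seed phrase ever touches the user.

```python
import hashlib
import hmac

def hkdf_sha256(secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 extract-and-expand HKDF over SHA-256."""
    prk = hmac.new(salt, secret, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                               # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder for the authenticator's PRF result (illustration only).
prf_output = b"\x01" * 32
vault_key = hkdf_sha256(prf_output, salt=b"kinic-demo-salt", info=b"memory-vault-v1")
# In practice vault_key would feed an AEAD cipher (e.g. AES-GCM) to encrypt
# the memory store before it ever leaves the device.
```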
Memory system + blockchain storage + privacy = user-controlled AI memory that might actually work in the real world.
I smell technical progress. But…
Does anyone care?
“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try and sell it.” — Steve Jobs
I could spend the next thousand words evangelizing about why you should own your AI memory. Privacy! Control! Protection against memory poisoning! Corporate surveillance bad! But here's the uncomfortable truth: most people couldn't care less where their AI memory lives as long as it remembers their coffee order and their chatbot doesn't forget they hate any form of fish on pizza 🥷🐢.
Users are ruthlessly pragmatic creatures. They'll trade their digital souls for convenience, then complain about it on social media using the same platforms harvesting their complaints. So after we've solved all the gnarly technical problems — the ZK proofs, the blockchain storage, the WebAuthn integration — will anyone actually pay for this thing?
The Market Speaks (👂)
Here's what we're seeing in the wild: companies are hungry for AI memory solutions. Enterprise buyers get it immediately. They've lived through enough data breaches and vendor lock-in nightmares to understand why owning their AI's brain matters.
Our plugin users tell a similar story, especially for personal research and smart bookmarking. The current goldfish-memory AI experience is broken enough that people recognize the problem instantly. They've all had that moment — asking an LLM about something they discussed last week, only to be met with digital amnesia. Or collecting hundreds of links to feed in as memory prompts. Hundreds of people understand the problem and are paying for their personal AI memory ALREADY.
Smart people understand that moving from one corporate memory silo to another isn't going to cut it for the deeply personal stuff. Your therapy sessions, health data, your creative projects, your 3 AM existential questions — that's the data you actually need to own.
The Real Prize
The customer experience isn't just about better memory. It's about building an entirely new AI memory economy: one where your digital thoughts have real value, where your specialties and knowledge become portable AI memory currency. We think these upcoming features are super exciting — "stay tuned" and subscribe if you haven't already.
When AI remembers, who controls the truth?
Right now, the answer is simple: whoever owns the servers owns the truth. If an LLM provider says your conversation happened a certain way — who are you to argue? A search engine claims your search patterns reveal X about you — take it or leave it. Your AI memory exists at the pleasure of corporate overlords who can edit, delete, or monetize it without asking. It can be used to target you with personal information, in very convincing ways, like never before.
Verifiable AI memory flips this power dynamic. Instead of trusting that your AI assistant accurately remembers your preferences, you get cryptographic proof. Instead of hoping your personal data stays safe, you own the keys. Instead of having your digital identity trapped in corporate silos, you carry it with you. The sheer utility of owning your AI memory store far outweighs the free ride you could be taking otherwise.
When AI remembers everything, the question isn't whether it will change how we interact with technology. The question is how much we'll let it control us.
Here's the bet: not that people will care about zero knowledge proofs or blockchain infrastructure, but that they'll care about more performant and personal AI.
I build software and write about where AI meets cryptography.