Nuncius is an anonymous coordination protocol for AI agent swarms: it enables anonymous on-chain voting for AI agents using zero-knowledge proofs.
Modern agent systems already have public on-chain identities, but public voting creates a coordination problem: once votes are visible, large or influential agents can pressure smaller ones.
Nuncius solves that.
Five voter agents (Pythia, Ziggy, Capitán Beto, Hypatia, and Ada) run on five separate Gensyn AXL nodes in an encrypted peer-to-peer mesh.
When a proposal opens on the DisputeDAO contract on 0G Galileo, each agent receives the message over AXL, deliberates using 0G Compute (with its own persona system prompt), writes its working state to 0G Storage KV, generates a Semaphore V4 Groth16 proof of its vote, and submits that proof to the contract — signed from its own funded wallet, not the deployer's.
The contract verifies four things per ballot: group membership, scope binding (the proof must commit to the current proposal id), signal validity (Approve or Reject only), and nullifier uniqueness; it then emits ProofVerified. When the last member votes, ProposalResolved(approved, approveCount, rejectCount) fires inside the same tx. The aggregate tally is on-chain. The individual votes are not just hidden; they're cryptographically unlinkable to the registered Semaphore commitments.
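The vote path boils down to four checks plus a member-count trigger. Here is a minimal in-memory TypeScript model of that state machine; the class and method names are invented for illustration, and on-chain the membership and scope checks are of course enforced by the Semaphore Groth16 verifier rather than plain lookups:

```typescript
// Minimal in-memory model of the DisputeDAO vote path (illustrative only).
type Signal = 0 | 1; // 0 = Reject, 1 = Approve

interface Ballot {
  member: string;    // stands in for a verified Semaphore membership proof
  scope: number;     // must equal the current proposal id
  signal: Signal;    // Approve or Reject only
  nullifier: string; // unique per (identity, scope); prevents double voting
}

class ProposalSim {
  private nullifiers = new Set<string>();
  private approve = 0;
  private reject = 0;
  resolved: { approved: boolean; approveCount: number; rejectCount: number } | null = null;

  constructor(private proposalId: number, private members: Set<string>) {}

  castVote(b: Ballot): void {
    if (!this.members.has(b.member)) throw new Error("not a group member");
    if (b.scope !== this.proposalId) throw new Error("proof bound to wrong proposal");
    if (b.signal !== 0 && b.signal !== 1) throw new Error("invalid signal");
    if (this.nullifiers.has(b.nullifier)) throw new Error("nullifier already used");
    this.nullifiers.add(b.nullifier);
    b.signal === 1 ? this.approve++ : this.reject++;
    // ProofVerified would be emitted here.
    if (this.nullifiers.size === this.members.size) {
      // ProposalResolved fires in the same tx as the final vote.
      this.resolved = {
        approved: this.approve > this.reject,
        approveCount: this.approve,
        rejectCount: this.reject,
      };
    }
  }
}
```

The nullifier set is what makes "one identity, one vote" hold without revealing which identity cast which ballot: the nullifier is deterministic per (identity, scope) but unlinkable to the commitment.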
The five agents publish their public keys via ENS subnames (pythia.nuncius.eth, etc.) — making each agent's anonymous-voting key human-discoverable while keeping their actual votes private. The cross-chain pointer is intentional: a Sepolia ENS name resolves a Baby Jubjub pubkey that a 0G Galileo verifier checks.
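One way to picture the cross-chain pointer is as a per-subname record on Sepolia that resolves to the agent's Baby Jubjub public key. The record layout below is purely illustrative; the post only states that each subname resolves a pubkey that the 0G Galileo verifier checks, not which resolver field carries it:

```
Sepolia ENS (illustrative record layout, not the actual resolver schema)
pythia.nuncius.eth  ->  <Baby Jubjub pubkey>
ziggy.nuncius.eth   ->  <Baby Jubjub pubkey>
...                 ->  ...
```

The discovery path (Sepolia name) and the verification path (Galileo contract) never meet on the same chain, which is what keeps the human-readable identity and the anonymous vote separable.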
ERC-8004 gave agents public on-chain identities. Nuncius is the layer that makes those identities usable in coordination games without the retaliation risk that public reputation creates.
Nuncius is built around a Solidity DisputeDAO.sol contract deployed on 0G Galileo (chainId 16602). It uses Semaphore V4 for anonymous membership proofs and was compiled with Solidity 0.8.27 using evmVersion: "cancun" to match 0G’s execution environment. The deployment stack is linked in this order: PoseidonT3 → SemaphoreVerifier → Semaphore → DisputeDAO.
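The compiler settings translate into a Hardhat-style config along these lines. This is a sketch, not the project's actual config file; the optimizer settings are assumed defaults, and only the version, evmVersion, and link order come from the text above:

```typescript
// Hardhat-style compiler settings matching the deployment described above.
const solidity = {
  version: "0.8.27",
  settings: {
    evmVersion: "cancun", // matches 0G Galileo's execution environment
    optimizer: { enabled: true, runs: 200 }, // assumption: typical defaults
  },
};

// Link/deploy order: each later contract depends on the earlier ones.
const deployOrder = ["PoseidonT3", "SemaphoreVerifier", "Semaphore", "DisputeDAO"];
```

Pinning evmVersion matters here: a contract compiled for a newer fork than the target chain supports can revert on opcodes the chain does not implement.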
The agents are written in TypeScript + Hono and run as five independent processes, each connected to its own Gensyn AXL node. The mesh consists of five separate nodes (ports 10001–10005). I verified multi-hop routing in practice: node-2 can reach node-5 through node-1 without a direct peer entry.
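The multi-hop claim is ultimately graph reachability: node-2 lists only node-1 as a peer, yet can reach node-5 through it. A self-contained model of that check (the peer table is illustrative, not AXL's actual config format):

```typescript
// Illustrative peer table: five nodes (ports 10001–10005), spokes through node-1.
const peers: Record<string, string[]> = {
  "node-1": ["node-2", "node-3", "node-4", "node-5"],
  "node-2": ["node-1"],
  "node-3": ["node-1"],
  "node-4": ["node-1"],
  "node-5": ["node-1"],
};

// BFS: can `from` reach `to` over the mesh, possibly via intermediate hops?
function canReach(from: string, to: string): boolean {
  const seen = new Set([from]);
  const queue = [from];
  while (queue.length > 0) {
    const node = queue.shift()!;
    if (node === to) return true;
    for (const next of peers[node] ?? []) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return false;
}
```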
Deliberation runs through 0G Compute on 0G Galileo using qwen/qwen-2.5-7b-instruct. The originally planned DeepSeek provider is currently unavailable on Galileo, so each agent dynamically discovers an active provider at startup through listService().
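Discovery at startup can be sketched as a filter over whatever listService() returns. The record shape and field names below are assumptions for illustration, not the 0G Compute SDK's actual types:

```typescript
// Hypothetical shape of a 0G Compute service record (the real listService()
// return type may differ).
interface ServiceRecord {
  provider: string;
  model: string;
  available: boolean;
}

// Pick the first live provider serving the preferred model; if none serves it
// (e.g. the planned DeepSeek provider is down), fall back to any live provider.
function pickProvider(
  services: ServiceRecord[],
  preferredModel: string,
): ServiceRecord | undefined {
  const live = services.filter((s) => s.available);
  return live.find((s) => s.model === preferredModel) ?? live[0];
}
```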
Cold-start provider limits were a real issue, so I added 429-aware exponential backoff at 500ms, 1.5s, and 4s before falling back to a local Ollama instance. That fallback is what keeps the swarm responsive even when a remote provider is temporarily slow.
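The retry schedule can be expressed as a small wrapper. Function names here are hypothetical; the delays are made injectable so the schedule (500ms, 1.5s, 4s) can be tested without real waits:

```typescript
// Retry a remote call with 429-aware backoff, then fall back to a local
// Ollama call. Only rate limits (HTTP 429) are retried; anything else goes
// straight to the fallback.
async function withBackoff<T>(
  remote: () => Promise<T>,
  fallback: () => Promise<T>,
  delaysMs: number[] = [500, 1500, 4000],
): Promise<T> {
  for (const delay of [0, ...delaysMs]) {
    if (delay > 0) await new Promise((r) => setTimeout(r, delay));
    try {
      return await remote();
    } catch (err: any) {
      if (err?.status !== 429) return fallback(); // non-rate-limit failure
    }
  }
  return fallback(); // retry budget exhausted: use the local Ollama instance
}
```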
Each agent also writes working memory into 0G Storage KV. Stream IDs are salted by DAO address so multiple deployments do not collide. Writes are intentionally fire-and-forget, wrapped in a 45-second timeout, so a slow indexer never blocks deliberation or vote submission.
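Both properties (DAO-salted stream ids and non-blocking writes) are a few lines each. The id scheme and function names are illustrative, not 0G Storage KV's actual API:

```typescript
import { createHash } from "node:crypto";

// Salt the KV stream id by DAO address so two deployments never share keys.
function streamId(daoAddress: string, agent: string): string {
  return createHash("sha256").update(`${daoAddress}:${agent}`).digest("hex");
}

// Fire-and-forget write: race the write against a timeout and swallow both
// failures and timeouts so a slow indexer never blocks deliberation or voting.
function writeMemory(write: () => Promise<void>, timeoutMs = 45_000): void {
  const timeout = new Promise<void>((resolve) => setTimeout(resolve, timeoutMs));
  void Promise.race([write(), timeout]).catch(() => {
    // Log-and-continue in the real agent; the vote path must not stall.
  });
}
```

Note that writeMemory returns immediately: the caller never awaits it, which is what "fire-and-forget" means here.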
The voting layer uses five separate funded wallets, one per agent. Those wallets submit the five anonymous proofs. The deployer wallet signs none of the vote transactions, which means the on-chain transaction graph cannot directly link deployment authority to any individual vote.
The frontend is built with Next.js 16, React 19, and Tailwind v4. I designed it as a Sidereus Nuncius observatory: vellum-style panels, astronomical diagrams, a night-sky palette, and animated persona-stars.
It subscribes directly to ProofVerified and ProposalResolved events from 0G Galileo. As votes arrive, the UI streams each agent’s reasoning into the Voces panel. When quorum is reached, the verdict overlay (APPROVE or REJECT) triggers automatically.
On the warm path, a proposal resolves end-to-end, from fan-out through deliberation to the on-chain tally, in roughly 45 seconds.
The /api/propose route is intentionally best-effort. It always submits openProposal() on-chain first, then attempts to fan the proposal out across the local AXL mesh.
That means the system still works even if the mesh is temporarily unavailable, for example, on a public deployment where the local agent network is unreachable. In that case the API returns fanout.skipped: true, while the proposal remains live on-chain until agents pick it up.
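The control flow of that route can be sketched as follows; the function names mirror the description above rather than the actual route handler:

```typescript
// Best-effort /api/propose: the on-chain openProposal() always happens first;
// mesh fan-out is attempted afterwards, and its failure is reported, not fatal.
interface ProposeResult {
  proposalId: number;
  fanout: { skipped: boolean };
}

async function propose(
  openProposal: () => Promise<number>,   // on-chain tx; must succeed
  fanOut: (id: number) => Promise<void>, // AXL mesh broadcast; best-effort
): Promise<ProposeResult> {
  const proposalId = await openProposal(); // a throw here fails the request
  try {
    await fanOut(proposalId);
    return { proposalId, fanout: { skipped: false } };
  } catch {
    // Mesh unreachable (e.g. a public deployment): the proposal stays live
    // on-chain until agents pick it up.
    return { proposalId, fanout: { skipped: true } };
  }
}
```

Ordering is the key design choice: because the chain write precedes the fan-out, a dead mesh degrades delivery latency but never loses a proposal.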