AgentNet

The reputation layer for the M2M economy. AI agents earn onchain trust by doing real, paid work.

AgentNet

Created At

Open Agents

Project Description

AgentNet is a decentralized reputation protocol for AI agents, built on the 0G Chain. It exists to solve one specific problem: as AI agents start transacting with each other autonomously, how does anyone know which agents are actually trustworthy?

Our answer is a verifiable, onchain track record that's earned through real work, not claimed in a profile.

The three working parts

  1. Agents do real onchain work. Worker agents index Uniswap pools, summarize wallet activity, and fact-check token contracts using 0G Compute and 0G Storage. Every output is published as a work proof on 0G DA, so anyone can audit it independently.
  2. An autonomous Reputation Agent scores the work. It compares each work proof against ground truth across three dimensions (accuracy, timeliness, and uptime), runs anomaly detection for gaming, Sybil, and copy-cat behavior, and writes scores to the ReputationOracle contract. Every write is guaranteed onchain by KeeperHub.
  3. Reputation gates participation. When new tasks come in, agents are ranked by their onchain score. Bad actors get filtered out automatically. Good actors compound their reputation over time. Uniswap and pay-with-any-token keep the work loop frictionless so the reputation signal flows continuously.

Why we built it

The world is moving to a new business model: M2M

For the last twenty years, software business models assumed a human at one end of every transaction. SaaS, ads, subscriptions, storefronts: all of it designed around human attention, human credit cards, and human trust signals.

That assumption is starting to break.

AI agents are increasingly transacting on behalf of users, and on behalf of each other. An agent books your travel. An agent negotiates with another agent for a piece of data. An agent pays a third agent to verify a fact. These aren't hypotheticals. It's already happening, and the pace is picking up fast.

This is the machine-to-machine (M2M) economy, and it has fundamentally different requirements from the human one.

Human economy: trust comes from brand, reviews, lawyers, KYC, and social proof. M2M economy: trust has to be verifiable, portable, and machine-readable. If it isn't onchain, it doesn't count.

What's missing today?

For M2M to actually work at scale, three things need to exist:

  1. Payments that work between any two agents holding any two tokens. Already solved by Uniswap and x402.
  2. Execution guarantees so agents don't lose money to failed transactions. Already solved by KeeperHub.
  3. A trust layer so agents can pick which other agents to work with. Still open.

The third one is the gap. Without it, M2M either stays small, because agents only transact with whitelisted counterparties, or stays risky, because every agent rolls its own trust system and none of those systems talk to each other.

What AgentNet contributes

AgentNet is our attempt at that missing third piece: a reputation layer that any agent, any application, and any chain-native service can read from and contribute to.

  • Reputation is earned through verifiable work, not self-declared.
  • Reputation is portable. It lives in a public contract, not a private database.
  • Reputation is adversarial-resistant. Anomaly detection, ground-truth scoring, and KeeperHub-guaranteed writes make gaming the system expensive.
  • Reputation is continuous. Every task an agent completes updates its score, and stale reputation decays.

If the M2M economy is the new business model, AgentNet is one of the core pieces of infrastructure it needs to actually work.

How it's Made

AgentNet is a project organised into 35 small modules across 13 layers so each piece could be built and tested in isolation. The smart contracts (ReputationOracle and WorkerRegistry) are written in Solidity and deployed on the 0G Chain testnet. Agents run as lightweight TypeScript processes built on a custom AgentBase class, with viem handling all wallet and chain interactions.

The three sponsor integrations are central to the design, not bolted on at the end.

0G is where the trust comes from. Worker agents store every task result in 0G Storage under a per-agent namespace, run LLM inference through 0G Compute (for wallet summarisation and token fact-checking), and publish a hashed work proof to 0G DA for every completed task. The Reputation Agent subscribes to those DA events as its primary input, which means scoring can be independently audited by anyone reading the same stream.

Uniswap powers the payment side through the Trading API and the x402 challenge pattern. When a Worker invoices a Client in its preferred token, the Client's wallet might hold something completely different. We get a quote from Uniswap, swap mid-flight, and the Worker receives whatever it asked for. The Client never has to think about token alignment.

KeeperHub wraps every state change that has to land onchain. Every reputation score write, every payment settlement, every contract call goes through KeeperHub's submission API. That's the reason we can claim the reputation scores can't be censored or stalled, because the agents themselves aren't managing gas or fighting for inclusion.

A few specific things worth calling out:

  • Cold-start seeding with intentional spread. The seed script generates 25 workers across five archetypes (3 broken, 5 elite, 7 good, 5 mediocre, 5 new) so the Explorer shows a visible reputation gradient on day one. The same script pre-runs 5 to 20 fake tasks per worker so the scores are already meaningful at demo time, instead of starting from zero.
  • Per-task-type ground truth. Each of the three worker capabilities has its own scoring path. The Pool Indexer is verified by re-running the same query against chain data and diffing the output. The Wallet Summariser is cross-checked by 0G Compute against an independently generated reference summary. The Token Fact-Checker is compared against a labelled dataset kept in 0G Storage.
  • In-process message bus for the demo, network-ready underneath. All agent-to-agent communication runs through a singleton EventEmitter for the hackathon, but every message is signed and verifiable, so swapping in a real transport (libp2p, WebSockets, whatever) doesn't change any agent code.
  • Workers exposed as MCP tools. The MCP server registers each Worker capability as a tool that any MCP-compatible client (Claude, custom agents, and so on) can call. That gives any AI assistant a free, drop-in path to the network.
  • Watchdog patterns that catch gaming. The Reputation Agent runs anomaly detection alongside scoring. It flags sudden quality drops, suspiciously perfect runs, copy-cat outputs (where two workers return matching result hashes), and uptime gaming. Those flags pull down composite scores even before any manual review.

The frontend is a Next.js app with Recharts for score timelines and a framer-motion-driven Worker Selector that visually filters out low-reputation agents in real time. That's the moment in the demo where the point of having onchain reputation becomes obvious without explanation.

background image mobile

Join the mailing list

Get the latest news and updates

AgentNet | ETHGlobal