Real-time arbitrage signals with integrated x402 payment execution


This project is a full-stack arbitrage prediction system built for prediction markets. It continuously monitors platforms like Polymarket and Kalshi, identifies markets that describe the same event, and highlights pricing gaps that represent high-quality arbitrage opportunities. At the core is a custom vector search engine running on Supabase. Each market is converted into an embedding that captures its semantic meaning. The vector search service then finds conceptually similar markets even when they use very different wording. This allows the system to surface mispriced pairs with precision and reliability.
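As a rough illustration of the matching step, the sketch below pairs markets from different venues by cosine similarity of their embeddings. The row shape, venue names, and threshold are illustrative assumptions, not the project's actual schema; in production this comparison would run server-side against the vector index rather than in application code.

```typescript
// Illustrative market row; the real schema and embedding source may differ.
interface MarketRow {
  id: string;
  venue: "polymarket" | "kalshi";
  question: string;
  embedding: number[];
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Pair cross-venue markets whose embeddings exceed a similarity threshold.
function matchMarkets(rows: MarketRow[], threshold = 0.9): [string, string][] {
  const pairs: [string, string][] = [];
  for (const a of rows) {
    for (const b of rows) {
      if (a.venue === "polymarket" && b.venue === "kalshi" &&
          cosineSimilarity(a.embedding, b.embedding) >= threshold) {
        pairs.push([a.id, b.id]);
      }
    }
  }
  return pairs;
}
```

The key property this demonstrates is that matching is driven by vector geometry, not string overlap, so two differently worded questions about the same event can still land in the same pair.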
Supabase Edge Functions power the backend. They fetch market data, generate embeddings, score pricing spreads, store results, and return structured guidance on where the opportunities are. Alongside this, Chainlink CRE workflows run in simulation mode to validate data flows and enrich incoming market snapshots during development. Users and AI agents receive clear, actionable information that directs them to the exact markets and positions needed to take advantage of the inefficiency. The system is optimized for clarity and precision in how it identifies and presents each opportunity.
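A minimal sketch of the spread-scoring idea, assuming binary YES/NO markets quoted as prices in [0, 1]: buying the cheaper YES on one venue and the implied cheaper NO on the other locks in an edge whenever the combined cost is below 1.00. The field names and the fee-free arithmetic are simplifying assumptions, not the project's exact scoring logic.

```typescript
// Quote for one venue's YES side of a matched binary market.
interface Quote { venue: string; yesPrice: number } // prices in [0, 1]

// Gross edge from buying YES where it is cheap and NO where YES is expensive.
// Combined cost below 1.00 means a theoretical arbitrage (before fees/slippage).
function scoreSpread(a: Quote, b: Quote) {
  const cheapYes = Math.min(a.yesPrice, b.yesPrice);
  const cheapNo = 1 - Math.max(a.yesPrice, b.yesPrice); // NO on the pricier-YES venue
  const edge = 1 - (cheapYes + cheapNo); // gross edge per $1 of guaranteed payout
  return {
    edge,
    buyYesOn: a.yesPrice <= b.yesPrice ? a.venue : b.venue,
    buyNoOn: a.yesPrice <= b.yesPrice ? b.venue : a.venue,
  };
}
```

For example, YES at 0.42 on one venue and 0.55 on the other yields a combined cost of 0.42 + 0.45 = 0.87, a 0.13 gross edge. Returning the exact venues and sides is what lets the system give the "structured guidance" described above.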
A custom x402 facilitator is deeply integrated into the backend. This facilitator enables users in the frontend and AI agents in the API layer to request updated arbitrage opportunities directly on Arbitrum One with no accounts, no billing systems, and no intermediaries. The project also includes an EVVM sandbox on the MATE Metaprotocol, which provides a safe environment for simulating the one-dollar dataset refresh purchase. Each refresh request is mirrored into a virtual blockchain so users and agents can inspect the structured action without sending real funds.
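To give a flavor of what the facilitator hands back, the sketch below builds a payment-requirement record for the one-dollar refresh. The shape loosely follows the general structure of x402 payment requirements, but the network id, resource path, and addresses here are placeholders, not the project's actual configuration.

```typescript
const USDC_DECIMALS = 6; // USDC uses 6 decimal places

// Loose approximation of an x402-style payment requirement record.
interface PaymentRequirement {
  scheme: "exact";
  network: string;
  maxAmountRequired: string; // amount in base units, as a string
  resource: string;
  payTo: string;
  asset: string;
}

// Build the requirement a client must satisfy to trigger a dataset refresh.
function buildRefreshRequirement(payTo: string, usdcAddress: string): PaymentRequirement {
  return {
    scheme: "exact",
    network: "arbitrum-one", // placeholder network identifier
    maxAmountRequired: (1 * 10 ** USDC_DECIMALS).toString(), // $1.00 in USDC base units
    resource: "/refresh", // hypothetical endpoint path
    payTo,
    asset: usdcAddress,
  };
}
```

Because the requirement is a plain structured value, it is exactly the kind of payload that can be mirrored into the sandbox for inspection before anything is signed or sent.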
A dedicated Fastify API service exposes all discovery features to AI agents. Agents can query live opportunities, request semantic comparisons, receive structured arbitrage instructions, and trigger the facilitator-powered payment flow to refresh the data. Amp analytics from The Graph enhance the data layer by tracking edge history and persistence over time. The system is built to use Amp datasets in production to compute average edge, range, and quality signals that help users and agents judge how stable a pricing gap has been across recent snapshots.
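A dependency-free sketch of the route logic an agent endpoint like this might wrap; in the real service it would be registered as a Fastify handler, and the opportunity shape and `minEdge` query parameter are assumptions for illustration.

```typescript
// Illustrative opportunity record returned to agents.
interface Opportunity { pair: [string, string]; edge: number }

// Filter and rank live opportunities for an agent query,
// e.g. something like GET /opportunities?minEdge=0.05 in the actual API.
function listOpportunities(all: Opportunity[], minEdge: number): Opportunity[] {
  return all
    .filter((o) => o.edge >= minEdge)
    .sort((a, b) => b.edge - a.edge); // largest edge first
}
```

Keeping the handler logic pure like this makes it straightforward to test without standing up the HTTP layer.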
Together, these components form a complete arbitrage discovery engine for prediction markets. The system semantically matches markets, identifies price inefficiencies, evaluates spreads, and delivers precise guidance to both users and agents. It combines vector search, Supabase backend logic, Chainlink CRE workflow support, an EVVM sandbox for safe previews, Amp-powered analytics, a payment-enabled x402 facilitator, and an agent-ready API to create a seamless experience for discovering arbitrage in the prediction market ecosystem.
This project is built as a full-stack arbitrage discovery engine for prediction markets, combining semantic search, real-time data ingestion, intent-based workflows, and multiple partner technologies across the ecosystem. At its foundation is Supabase, which manages the entire backend pipeline. Supabase Edge Functions fetch live markets from Polymarket and Kalshi, convert each market into an embedding, compute spreads across venues, score arbitrage opportunities, and store structured results in Postgres. A custom vector search system runs inside this pipeline, using semantic embeddings to detect when two markets describe the same real-world event even if the wording is completely different. This allows the system to match and compare markets with high precision.
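Before embedding and comparison, venue payloads have to be brought into one shape. The sketch below shows that normalization step under assumed raw field names — the real Polymarket and Kalshi API responses differ, so treat these shapes as illustrative only.

```typescript
// Common shape both venues are normalized into before embedding and scoring.
interface NormalizedMarket { venue: string; question: string; yesPrice: number }

// Hypothetical Polymarket-style payload: prices already quoted in [0, 1].
function normalizePolymarket(raw: { question: string; outcomePrices: [number, number] }): NormalizedMarket {
  return { venue: "polymarket", question: raw.question, yesPrice: raw.outcomePrices[0] };
}

// Hypothetical Kalshi-style payload: quotes in cents, converted to [0, 1].
function normalizeKalshi(raw: { title: string; yes_bid: number }): NormalizedMarket {
  return { venue: "kalshi", question: raw.title, yesPrice: raw.yes_bid / 100 };
}
```

Once everything shares one schema, the same embedding, matching, and spread-scoring code can run over both venues without special cases.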
Chainlink CRE workflows are integrated in simulation mode to validate and enrich the ingestion flow. They simulate decentralized data pipelines and provide a clean structure for how external data processes should behave once CRE goes fully production-ready for this use case. Even in simulation, they helped stress-test assumptions about cadence, error paths, and data normalization. The CRE workflow mirrors the Supabase logic but adds instrumentation, logging, and an event pipeline that makes the ingestion flow repeatable and verifiable.
For refreshed data access, the system uses a custom x402 facilitator deployed on Arbitrum One. This facilitator constructs structured one-dollar USDC dataset refresh actions so that users and AI agents can request updated arbitrage data without relying on accounts, billing layers, or custodial systems. To make this safer for exploration and demos, every refresh request is mirrored into an EVVM sandbox running on the MATE Metaprotocol on Sepolia. EVVM acts as a virtual blockchain that gives us a fully isolated environment to preview each x402 action, including the full structured payload, without executing anything onchain or spending real funds. This made it possible to build an onchain workflow that remains accessible during development and judging.
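The mirroring step can be sketched as a pure transform: the structured refresh action is wrapped in a preview record with an explicit flag showing nothing was executed. The EVVM/MATE call itself is out of scope here, and every field name below is an assumption chosen for illustration.

```typescript
// Structured refresh action as it would be constructed by the facilitator.
interface RefreshAction { payer: string; amount: string; asset: string }

// Sandbox preview record: the full payload, plus an explicit non-execution flag.
interface SandboxPreview { network: "sepolia-evvm"; action: RefreshAction; executed: false }

// Mirror an action into the sandbox: nothing is signed or broadcast, the
// preview simply exposes the exact structured payload for inspection.
function mirrorToSandbox(action: RefreshAction): SandboxPreview {
  return { network: "sepolia-evvm", action, executed: false };
}
```

The point of the shape is auditability: users and agents see precisely what the real Arbitrum One action would contain before any funds move.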
For analytics, the project is built to use The Graph’s Amp database in production. Each computed edge snapshot will be pushed into an Amp dataset, where it can be queried using SQL for historical analysis. The dashboard will use these queries to power insight panels, showing the average edge, the range of spreads over time, and a quality rating that reflects how stable or noisy a particular pricing gap has been. Amp provides a fast, scalable store for temporal analytics without increasing the complexity of the Supabase backend.
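A small sketch of the metrics the dashboard would derive from an Amp edge-history query: average, range, and a quality label. The stability heuristic here (range under half the average edge counts as "stable") is an invented placeholder, not the project's actual rating logic.

```typescript
// Derive stability metrics from a series of historical edge snapshots.
function edgeStats(edges: number[]): { avg: number; range: number; quality: "stable" | "noisy" } {
  const avg = edges.reduce((sum, e) => sum + e, 0) / edges.length;
  const range = Math.max(...edges) - Math.min(...edges);
  // Placeholder heuristic: low variability relative to the average edge
  // suggests a persistent gap rather than transient noise.
  const quality = range < avg * 0.5 ? "stable" : "noisy";
  return { avg, range, quality };
}
```

A tight range around a healthy average suggests a gap that has persisted across snapshots, which is exactly the judgment the quality signal is meant to support.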
A dedicated Fastify API service exposes all of this functionality to AI agents. Agents can query live opportunities, request semantic comparisons, retrieve Amp-powered analytics, and trigger the x402 facilitator refresh workflow. This creates an agent-ready interface that can be used for reasoning about prediction markets, identifying profitable gaps, and guiding users to specific positions.
Altogether, the system integrates semantic embeddings, Supabase backend logic, Chainlink CRE workflows, x402 intent construction, an EVVM sandbox for safe previews, Amp analytics for historical scoring, and a clean, agent-focused API. The result is a coordinated, modular, and deeply instrumented arbitrage engine purpose-built for prediction markets.

