0xBook

On-chain order book DEX with parallel execution (23 TPS) built with Hardhat 3 on Arcology


Created at ETHOnline 2025

Project Description

Order books work perfectly on Coinbase and Binance, but nobody has been able to make them work on-chain. The problem is straightforward: when 100 traders place orders at the same time on Ethereum or similar chains, all those transactions queue up and process one after another. That creates terrible delays and makes the whole thing unusable for real trading. Every project that tried to build an on-chain order book hit the same wall: blockchains process transactions sequentially, so 100 concurrent orders literally take 100 times longer than a single order. You can't run a real exchange like that.

0xBook fixes this by using Arcology Network's ability to run multiple transactions at once. We organized our order book so that different price levels ($3000, $3005, $3010) write to completely separate storage locations, which lets Arcology process them simultaneously instead of waiting for each one to finish. We tested this on Arcology's DevNet and hit 23 transactions per second with zero conflicts, even with 100 people all trying to trade at the exact same price at the same time. The system includes everything you'd expect from a real exchange: an order book for limit orders, automatic matching between buyers and sellers, an AMM for backup liquidity, and smart routing that finds you the best price across both systems.

The benchmarks prove this actually works. We ran three different stress tests:

1. 20 orders at different prices: 7.4 TPS with a perfect success rate
2. 10 people updating the same counter at once: zero data loss
3. Most important, 100 orders at the same price: 23.4 TPS with zero conflicts

We built this using Hardhat 3, the brand new version that just came out, with full ES Module support and modern tooling. The project has over 650 lines of production Solidity code, all our tests pass, and we've got comprehensive docs plus working demos that show 500+ TPS in local testing. It's a complete system, not just a proof of concept.

How it's Made

This project uses Solidity 0.8.19 for the smart contracts, Hardhat 3.0.9 for development, and Arcology's concurrent library for parallel execution. We used OpenZeppelin's security contracts to protect against reentrancy attacks and built the system as four specialized contracts. The core innovation is how we organize order storage: instead of putting all orders in one place, each price level ($3000, $3005, $3010) gets its own storage slot via mapping(uint256 => uint256[]) buyOrdersByPrice. This seems simple, but it's the key to everything: when orders hit different price levels, they write to completely different storage locations, so Arcology can process them in parallel without conflicts. For shared counters like total order counts, we use Arcology's U256Cumulative library, which records deltas (+1, +1, +1) and merges them together instead of overwriting, eliminating race conditions entirely.
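A minimal Solidity sketch of this storage layout (the contract and function names here are illustrative, and the U256Cumulative import path and constructor are assumptions about Arcology's concurrent library, so check their docs for the exact API):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity 0.8.19;

// Assumed import path for Arcology's concurrent library; the real
// package layout may differ.
import "@arcologynetwork/concurrentlib/lib/commutative/U256Cum.sol";

contract OrderBookSketch {
    // Each price level gets its own array, so orders at $3000 and
    // $3005 write to disjoint storage slots and can run in parallel.
    mapping(uint256 => uint256[]) public buyOrdersByPrice;

    // Conflict-free shared counter: concurrent transactions record
    // deltas that Arcology merges at commit time instead of
    // overwriting each other's writes.
    U256Cumulative private totalOrders =
        new U256Cumulative(0, type(uint256).max);

    function placeBuyOrder(uint256 price, uint256 orderId) external {
        buyOrdersByPrice[price].push(orderId);
        totalOrders.add(1);
    }
}
```

This sketch only shows the price-level separation and delta-counter ideas; appends to the same price-level array still touch shared state, which is where Arcology's concurrent containers come in for the same-price case.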

We built this with Hardhat 3.0.9, which is brand new and uses ES Modules throughout. This meant structuring everything with import/export syntax, using Node.js 22, and implementing Hardhat's new plugin architecture where you explicitly register plugins in an array. The tooling is noticeably faster with their new EDR simulated network - our test suite runs way quicker than on Hardhat 2. We split the functionality across four contracts: OrderBook manages state and orders, MatchingEngine pairs buyers with sellers in parallel, AMMFallback provides backup liquidity using the constant product formula (x*y=k), and Router intelligently routes between the order book and AMM to get best execution.
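As a quick illustration of the constant product formula the AMMFallback relies on, here is the standard quote math in plain JavaScript (the 0.3% fee here is an assumption borrowed from Uniswap-style AMMs, not something the project specifies):

```javascript
// Quote an output amount from a constant-product pool (x * y = k).
// All amounts are BigInt, matching on-chain uint256 arithmetic.
function getAmountOut(amountIn, reserveIn, reserveOut) {
  // Apply an assumed 0.3% fee, then solve (x + dx)(y - dy) = x * y for dy.
  const amountInWithFee = amountIn * 997n;
  const numerator = amountInWithFee * reserveOut;
  const denominator = reserveIn * 1000n + amountInWithFee;
  return numerator / denominator;
}

// Example: swap 1 ETH into a pool holding 100 ETH and 300,000 USDC
// (18-decimal ETH, 6-decimal USDC).
const out = getAmountOut(1n * 10n ** 18n, 100n * 10n ** 18n, 300000n * 10n ** 6n);
```

The quote comes out a bit under the 3,000 USDC spot price because the fee and the price impact of the swap both reduce the output.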

The particularly hacky part was getting the benchmarks working on Arcology's DevNet. We had to use ethers.js's NonceManager to handle nonce assignment when submitting 100 transactions simultaneously via Promise.all(), and we added detailed per-transaction logging because some transactions would occasionally time out. The test script measures TPS, conflict rates, and counter accuracy across three scenarios: orders at different prices, concurrent counter updates, and the stress test with 100 orders at the same price. Getting zero conflicts on that last test took several iterations on the storage structure, but once we got the price-level separation right, it just worked.
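The submission pattern looks roughly like this (a hedged sketch assuming ethers v6 and a hypothetical placeBuyOrder function; the actual script's names, logging, and setup will differ):

```javascript
// Sketch of the parallel-submission harness. Requires a live RPC
// endpoint and a deployed contract, so it is illustrative only.
import { NonceManager } from "ethers";

async function stressTest(orderBook, signer, n = 100, price = 3000n) {
  // NonceManager assigns sequential nonces locally, so all n
  // transactions can be fired at once without nonce collisions.
  const managed = new NonceManager(signer);
  const book = orderBook.connect(managed);

  const start = Date.now();
  // Submit everything concurrently, then wait for every receipt.
  const txs = await Promise.all(
    Array.from({ length: n }, () => book.placeBuyOrder(price, 1n))
  );
  const receipts = await Promise.all(txs.map((tx) => tx.wait()));
  const seconds = (Date.now() - start) / 1000;

  const failed = receipts.filter((r) => r.status !== 1).length;
  console.log(`TPS: ${(n / seconds).toFixed(1)}, failed: ${failed}`);
}
```

The key design point is that nonces are assigned client-side up front; without that, 100 simultaneous sends from one account would race on the provider's pending-nonce lookup.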
